Test Report: Hyperkit_macOS 19326

35e58bd4f2346c2fce1feaa9162990386c1fdc2b:2024-07-25:35495

Test fail (18/330)

TestOffline (195.37s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-633000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-633000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.969494542s)
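To reproduce this failure outside the test harness, the same start invocation can be run by hand (a sketch; it assumes a macOS host with the hyperkit driver installed and a locally built out/minikube-darwin-amd64):

    # same flags as the test invocation recorded above
    out/minikube-darwin-amd64 start -p offline-docker-633000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit
    # remove the profile again afterwards
    out/minikube-darwin-amd64 delete -p offline-docker-633000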

-- stdout --
	* [offline-docker-633000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-633000" primary control-plane node in "offline-docker-633000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-633000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
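The stderr log below shows the driver generating MAC c6:46:e6:d8:20:54 for the VM and then polling /var/db/dhcpd_leases for it on each attempt. As a manual diagnostic sketch on the affected host, one could check whether that MAC ever obtained a lease and whether the hyperkit process is still alive:

    # does the generated MAC appear in the host's DHCP leases?
    grep -n 'c6:46:e6:d8:20:54' /var/db/dhcpd_leases
    # is the hyperkit VM process (pid 6015 in the log) still running?
    ps -p 6015 -o pid,command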
** stderr ** 
	I0725 11:20:21.147765    5965 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:20:21.147948    5965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:21.147954    5965 out.go:304] Setting ErrFile to fd 2...
	I0725 11:20:21.147958    5965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:21.148130    5965 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 11:20:21.150276    5965 out.go:298] Setting JSON to false
	I0725 11:20:21.176320    5965 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4791,"bootTime":1721926830,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 11:20:21.176449    5965 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:20:21.232306    5965 out.go:177] * [offline-docker-633000] minikube v1.33.1 on Darwin 14.5
	I0725 11:20:21.273663    5965 notify.go:220] Checking for updates...
	I0725 11:20:21.298607    5965 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:20:21.358563    5965 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 11:20:21.391625    5965 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 11:20:21.413458    5965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:20:21.436476    5965 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:20:21.457376    5965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:20:21.478784    5965 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:20:21.507608    5965 out.go:177] * Using the hyperkit driver based on user configuration
	I0725 11:20:21.549830    5965 start.go:297] selected driver: hyperkit
	I0725 11:20:21.549860    5965 start.go:901] validating driver "hyperkit" against <nil>
	I0725 11:20:21.549884    5965 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:20:21.554831    5965 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:20:21.554948    5965 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 11:20:21.563144    5965 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 11:20:21.566876    5965 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:20:21.566896    5965 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 11:20:21.566927    5965 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:20:21.567152    5965 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:20:21.567203    5965 cni.go:84] Creating CNI manager for ""
	I0725 11:20:21.567217    5965 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:20:21.567229    5965 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:20:21.567292    5965 start.go:340] cluster config:
	{Name:offline-docker-633000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-633000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:20:21.567373    5965 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:20:21.614624    5965 out.go:177] * Starting "offline-docker-633000" primary control-plane node in "offline-docker-633000" cluster
	I0725 11:20:21.656278    5965 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:20:21.656341    5965 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0725 11:20:21.656360    5965 cache.go:56] Caching tarball of preloaded images
	I0725 11:20:21.656464    5965 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 11:20:21.656473    5965 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:20:21.656723    5965 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/offline-docker-633000/config.json ...
	I0725 11:20:21.656742    5965 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/offline-docker-633000/config.json: {Name:mkaf4e0743f2428c921e471fa663158240e9bb86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:20:21.657072    5965 start.go:360] acquireMachinesLock for offline-docker-633000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:20:21.657132    5965 start.go:364] duration metric: took 46.84µs to acquireMachinesLock for "offline-docker-633000"
	I0725 11:20:21.657153    5965 start.go:93] Provisioning new machine with config: &{Name:offline-docker-633000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-633000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:20:21.657207    5965 start.go:125] createHost starting for "" (driver="hyperkit")
	I0725 11:20:21.678503    5965 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:20:21.678660    5965 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:20:21.678700    5965 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:20:21.687504    5965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53798
	I0725 11:20:21.687875    5965 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:20:21.688285    5965 main.go:141] libmachine: Using API Version  1
	I0725 11:20:21.688298    5965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:20:21.688525    5965 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:20:21.688630    5965 main.go:141] libmachine: (offline-docker-633000) Calling .GetMachineName
	I0725 11:20:21.688721    5965 main.go:141] libmachine: (offline-docker-633000) Calling .DriverName
	I0725 11:20:21.688827    5965 start.go:159] libmachine.API.Create for "offline-docker-633000" (driver="hyperkit")
	I0725 11:20:21.688852    5965 client.go:168] LocalClient.Create starting
	I0725 11:20:21.688889    5965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem
	I0725 11:20:21.688948    5965 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:21.688961    5965 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:21.689039    5965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem
	I0725 11:20:21.689078    5965 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:21.689090    5965 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:21.689104    5965 main.go:141] libmachine: Running pre-create checks...
	I0725 11:20:21.689112    5965 main.go:141] libmachine: (offline-docker-633000) Calling .PreCreateCheck
	I0725 11:20:21.689182    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:21.689368    5965 main.go:141] libmachine: (offline-docker-633000) Calling .GetConfigRaw
	I0725 11:20:21.700064    5965 main.go:141] libmachine: Creating machine...
	I0725 11:20:21.700092    5965 main.go:141] libmachine: (offline-docker-633000) Calling .Create
	I0725 11:20:21.700327    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:21.700606    5965 main.go:141] libmachine: (offline-docker-633000) DBG | I0725 11:20:21.700299    5988 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:20:21.700742    5965 main.go:141] libmachine: (offline-docker-633000) Downloading /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1195/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 11:20:22.166201    5965 main.go:141] libmachine: (offline-docker-633000) DBG | I0725 11:20:22.166104    5988 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/id_rsa...
	I0725 11:20:22.264373    5965 main.go:141] libmachine: (offline-docker-633000) DBG | I0725 11:20:22.264277    5988 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/offline-docker-633000.rawdisk...
	I0725 11:20:22.264391    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Writing magic tar header
	I0725 11:20:22.264406    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Writing SSH key tar header
	I0725 11:20:22.264788    5965 main.go:141] libmachine: (offline-docker-633000) DBG | I0725 11:20:22.264740    5988 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000 ...
	I0725 11:20:22.729588    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:22.729635    5965 main.go:141] libmachine: (offline-docker-633000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/hyperkit.pid
	I0725 11:20:22.729741    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Using UUID a1b39087-1095-42e0-9fa3-2a5ffd18c8f1
	I0725 11:20:22.960167    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Generated MAC c6:46:e6:d8:20:54
	I0725 11:20:22.960186    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-633000
	I0725 11:20:22.960232    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a1b39087-1095-42e0-9fa3-2a5ffd18c8f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLi
ne:"", process:(*os.Process)(nil)}
	I0725 11:20:22.960268    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a1b39087-1095-42e0-9fa3-2a5ffd18c8f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLi
ne:"", process:(*os.Process)(nil)}
	I0725 11:20:22.960339    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a1b39087-1095-42e0-9fa3-2a5ffd18c8f1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/offline-docker-633000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/bzimage,
/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-633000"}
	I0725 11:20:22.960386    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a1b39087-1095-42e0-9fa3-2a5ffd18c8f1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/offline-docker-633000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machi
nes/offline-docker-633000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-633000"
	I0725 11:20:22.960401    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 11:20:22.963414    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:22 DEBUG: hyperkit: Pid is 6015
	I0725 11:20:22.963865    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 0
	I0725 11:20:22.963881    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:22.963954    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:22.965004    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:22.965097    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:22.965115    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:22.965140    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:22.965149    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:22.965163    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:22.965182    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:22.965192    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:22.965203    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:22.965210    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:22.965216    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:22.965240    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:22.965262    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:22.965324    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:22.965351    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:22.965359    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:22.965393    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:22.965413    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:22.965439    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:22.965462    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:22.971451    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 11:20:23.105340    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 11:20:23.105958    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:20:23.105973    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:20:23.105980    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:20:23.105986    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:20:23.484889    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 11:20:23.484906    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 11:20:23.599790    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:20:23.599819    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:20:23.599829    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:20:23.599848    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:20:23.600714    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 11:20:23.600751    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 11:20:24.965403    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 1
	I0725 11:20:24.965417    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:24.965506    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:24.966379    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:24.966408    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:24.966421    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:24.966433    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:24.966439    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:24.966461    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:24.966487    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:24.966504    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:24.966527    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:24.966547    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:24.966563    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:24.966579    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:24.966588    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:24.966594    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:24.966601    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:24.966609    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:24.966615    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:24.966625    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:24.966632    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:24.966638    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:26.968315    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 2
	I0725 11:20:26.968342    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:26.968434    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:26.969220    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:26.969263    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:26.969272    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:26.969284    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:26.969291    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:26.969307    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:26.969317    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:26.969325    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:26.969333    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:26.969352    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:26.969364    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:26.969372    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:26.969388    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:26.969396    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:26.969405    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:26.969417    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:26.969427    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:26.969434    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:26.969442    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:26.969457    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:28.969516    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 3
	I0725 11:20:28.969534    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:28.969599    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:28.970417    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:28.970477    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:28.970486    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:28.970498    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:28.970507    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:28.970516    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:28.970525    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:28.970541    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:28.970554    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:28.970562    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:28.970570    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:28.970577    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:28.970583    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:28.970590    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:28.970598    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:28.970617    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:28.970630    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:28.970645    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:28.970657    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:28.970666    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:28.988102    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0725 11:20:28.988223    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0725 11:20:28.988234    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0725 11:20:29.008553    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:20:29 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0725 11:20:30.972194    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 4
	I0725 11:20:30.972214    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:30.972323    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:30.973148    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:30.973213    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:30.973228    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:30.973241    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:30.973259    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:30.973273    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:30.973286    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:30.973294    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:30.973301    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:30.973318    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:30.973326    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:30.973338    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:30.973348    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:30.973356    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:30.973363    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:30.973369    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:30.973375    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:30.973381    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:30.973387    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:30.973393    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:32.974877    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 5
	I0725 11:20:32.974892    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:32.974955    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:32.975735    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:32.975772    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:32.975780    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:32.975796    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:32.975803    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:32.975831    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:32.975854    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:32.975867    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:32.975884    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:32.975896    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:32.975904    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:32.975913    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:32.975919    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:32.975925    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:32.975941    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:32.975953    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:32.975961    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:32.975975    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:32.975983    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:32.975991    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:34.976047    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 6
	I0725 11:20:34.976059    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:34.976119    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:34.976953    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:34.976998    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:34.977010    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:34.977019    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:34.977026    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:34.977041    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:34.977046    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:34.977053    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:34.977058    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:34.977078    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:34.977089    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:34.977108    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:34.977122    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:34.977135    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:34.977145    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:34.977153    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:34.977160    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:34.977174    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:34.977187    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:34.977214    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:36.977499    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 7
	I0725 11:20:36.977511    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:36.977586    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:36.978370    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:36.978416    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:36.978426    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:36.978435    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:36.978443    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:36.978449    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:36.978462    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:36.978478    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:36.978487    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:36.978505    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:36.978518    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:36.978526    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:36.978535    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:36.978543    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:36.978551    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:36.978560    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:36.978567    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:36.978579    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:36.978586    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:36.978593    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:38.978724    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 8
	I0725 11:20:38.978739    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:38.978881    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:38.979727    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:38.979737    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:38.979746    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:38.979753    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:38.979767    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:38.979793    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:38.979802    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:38.979811    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:38.979819    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:38.979825    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:38.979839    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:38.979852    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:38.979860    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:38.979868    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:38.979875    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:38.979883    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:38.979896    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:38.979910    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:38.979920    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:38.979934    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:40.980301    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 9
	I0725 11:20:40.980317    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:40.980398    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:40.981170    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:40.981218    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:40.981233    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:40.981253    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:40.981266    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:40.981273    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:40.981282    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:40.981289    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:40.981297    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:40.981304    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:40.981312    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:40.981319    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:40.981326    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:40.981333    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:40.981341    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:40.981349    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:40.981356    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:40.981363    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:40.981368    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:40.981375    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:42.982186    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 10
	I0725 11:20:42.982201    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:42.982235    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:42.983095    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:42.983140    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:42.983156    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:42.983168    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:42.983180    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:42.983190    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:42.983198    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:42.983205    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:42.983210    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:42.983222    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:42.983239    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:42.983248    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:42.983264    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:42.983272    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:42.983279    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:42.983286    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:42.983292    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:42.983299    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:42.983306    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:42.983313    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:44.983622    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 11
	I0725 11:20:44.983649    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:44.983731    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:44.984496    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:44.984549    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:44.984577    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:44.984602    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:44.984610    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:44.984622    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:44.984628    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:44.984635    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:44.984642    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:44.984648    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:44.984656    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:44.984663    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:44.984670    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:44.984676    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:44.984683    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:44.984691    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:44.984722    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:44.984731    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:44.984739    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:44.984754    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:46.985671    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 12
	I0725 11:20:46.985691    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:46.985780    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:46.986536    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:46.986571    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:46.986580    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:46.986602    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:46.986615    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:46.986623    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:46.986629    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:46.986644    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:46.986652    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:46.986660    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:46.986669    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:46.986681    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:46.986694    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:46.986707    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:46.986715    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:46.986744    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:46.986758    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:46.986767    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:46.986774    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:46.986783    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:48.987592    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 13
	I0725 11:20:48.987606    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:48.987664    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:48.988447    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:48.988524    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:48.988537    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:48.988549    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:48.988559    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:48.988571    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:48.988579    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:48.988592    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:48.988610    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:48.988630    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:48.988643    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:48.988651    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:48.988661    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:48.988667    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:48.988675    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:48.988682    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:48.988689    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:48.988696    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:48.988705    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:48.988718    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:50.990720    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 14
	I0725 11:20:50.990733    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:50.990785    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:50.991754    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:50.991798    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:50.991810    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:50.991820    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:50.991826    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:50.991837    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:50.991848    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:50.991870    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:50.991879    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:50.991888    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:50.991895    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:50.991901    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:50.991909    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:50.991918    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:50.991926    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:50.991933    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:50.991941    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:50.991948    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:50.991956    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:50.991964    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:52.994072    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 15
	I0725 11:20:52.994088    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:52.994167    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:52.995017    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:52.995072    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:52.995080    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:52.995106    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:52.995121    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:52.995132    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:52.995148    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:52.995160    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:52.995168    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:52.995175    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:52.995182    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:52.995191    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:52.995212    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:52.995223    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:52.995239    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:52.995250    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:52.995259    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:52.995266    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:52.995273    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:52.995281    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:54.996379    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 16
	I0725 11:20:54.996405    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:54.996489    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:54.997255    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:54.997312    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:54.997327    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:54.997364    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:54.997382    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:54.997395    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:54.997404    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:54.997411    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:54.997418    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:54.997436    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:54.997443    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:54.997450    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:54.997457    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:54.997463    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:54.997472    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:54.997478    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:54.997485    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:54.997491    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:54.997497    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:54.997507    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:56.997841    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 17
	I0725 11:20:56.997859    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:56.997934    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:56.998709    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:56.998762    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:56.998776    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:56.998784    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:56.998792    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:56.998804    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:56.998814    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:56.998828    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:56.998840    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:56.998857    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:56.998868    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:56.998878    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:56.998888    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:56.998899    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:56.998907    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:56.998916    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:56.998923    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:56.998937    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:56.998951    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:56.998960    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:20:59.000953    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 18
	I0725 11:20:59.000966    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:20:59.000982    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:20:59.001796    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:20:59.001845    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:20:59.001853    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:20:59.001862    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:20:59.001868    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:20:59.001875    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:20:59.001881    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:20:59.001889    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:20:59.001895    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:20:59.001901    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:20:59.001910    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:20:59.001918    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:20:59.001926    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:20:59.001934    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:20:59.001941    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:20:59.001948    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:20:59.001963    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:20:59.001981    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:20:59.001994    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:20:59.002004    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:01.003663    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 19
	I0725 11:21:01.003678    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:01.003715    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:01.004510    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:21:01.004570    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:01.004579    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:01.004586    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:01.004593    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:01.004603    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:01.004614    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:01.004620    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:01.004627    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:01.004643    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:01.004650    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:01.004658    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:01.004669    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:01.004677    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:01.004684    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:01.004691    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:01.004698    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:01.004705    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:01.004713    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:01.004721    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:03.005091    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 20
	I0725 11:21:03.005105    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:03.005194    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:03.005970    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:21:03.006039    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:03.006059    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:03.006067    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:03.006077    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:03.006085    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:03.006091    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:03.006105    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:03.006122    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:03.006134    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:03.006149    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:03.006161    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:03.006174    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:03.006183    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:03.006190    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:03.006197    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:03.006204    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:03.006210    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:03.006216    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:03.006224    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:05.008239    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 21
	I0725 11:21:05.008254    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:05.008301    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:05.009092    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:21:05.009161    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:05.009172    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:05.009181    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:05.009189    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:05.009203    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:05.009210    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:05.009236    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:05.009250    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:05.009268    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:05.009280    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:05.009288    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:05.009296    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:05.009303    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:05.009311    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:05.009323    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:05.009329    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:05.009338    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:05.009345    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:05.009352    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:07.010578    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 22
	I0725 11:21:07.010593    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:07.010689    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:07.011789    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:21:07.011838    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:07.011856    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:07.011872    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:07.011885    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:07.011895    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:07.011909    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:07.011926    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:07.011934    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:07.011941    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:07.011949    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:07.011956    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:07.011967    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:07.011988    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:07.011998    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:07.012006    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:07.012012    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:07.012032    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:07.012055    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:07.012069    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:09.012896    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 23
	I0725 11:21:09.012912    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:09.012997    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:09.013796    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:21:09.013853    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:09.013866    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:09.013875    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:09.013882    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:09.013889    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:09.013896    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:09.013903    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:09.013909    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:09.013927    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:09.013934    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:09.013954    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:09.013966    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:09.013974    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:09.013983    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:09.013991    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:09.013999    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:09.014005    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:09.014012    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:09.014021    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:11.014375    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 24
	I0725 11:21:11.014389    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:11.014444    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:11.015475    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:21:11.015511    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:11.015518    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:11.015528    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:11.015544    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:11.015559    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:11.015574    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:11.015587    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:11.015595    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:11.015603    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:11.015611    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:11.015620    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:11.015628    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:11.015635    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:11.015641    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:11.015657    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:11.015665    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:11.015673    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:11.015681    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:11.015693    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:13.016909    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 25
	I0725 11:21:13.016924    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:13.016997    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:13.017828    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:21:13.017865    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:13.017877    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:13.017905    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:13.017916    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:13.017928    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:13.017935    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:13.017942    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:13.017948    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:13.017954    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:13.017966    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:13.017976    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:13.017986    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:13.017995    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:13.018001    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:13.018009    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:13.018019    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:13.018028    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:13.018035    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:13.018040    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:15.018139    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 26
	I0725 11:21:15.018152    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:15.018253    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:15.019076    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:21:15.019119    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:15.019129    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:15.019137    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:15.019144    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:15.019166    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:15.019172    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:15.019187    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:15.019200    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:15.019220    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:15.019237    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:15.019245    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:15.019254    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:15.019260    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:15.019266    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:15.019282    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:15.019293    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:15.019302    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:15.019312    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:15.019324    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:17.021294    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 27
	I0725 11:21:17.021307    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:17.021363    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:17.022139    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:21:17.022193    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:17.022205    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:17.022214    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:17.022226    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:17.022235    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:17.022241    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:17.022248    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:17.022259    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:17.022269    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:17.022279    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:17.022286    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:17.022293    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:17.022300    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:17.022307    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:17.022326    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:17.022333    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:17.022343    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:17.022350    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:17.022358    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:19.022378    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 28
	I0725 11:21:19.022400    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:19.022465    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:19.023258    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:21:19.023303    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:19.023319    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:19.023334    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:19.023348    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:19.023364    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:19.023384    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:19.023393    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:19.023403    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:19.023423    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:19.023436    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:19.023446    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:19.023458    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:19.023466    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:19.023472    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:19.023479    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:19.023489    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:19.023502    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:19.023512    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:19.023529    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:21.025488    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 29
	I0725 11:21:21.025503    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:21.025564    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:21.026332    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for c6:46:e6:d8:20:54 in /var/db/dhcpd_leases ...
	I0725 11:21:21.026397    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:21.026409    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:21.026417    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:21.026423    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:21.026432    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:21.026437    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:21.026444    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:21.026450    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:21.026466    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:21.026476    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:21.026483    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:21.026491    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:21.026505    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:21.026519    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:21.026527    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:21.026534    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:21.026551    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:21.026564    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:21.026582    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:23.027511    5965 client.go:171] duration metric: took 1m1.337486555s to LocalClient.Create
	I0725 11:21:25.029532    5965 start.go:128] duration metric: took 1m3.371110778s to createHost
	I0725 11:21:25.029567    5965 start.go:83] releasing machines lock for "offline-docker-633000", held for 1m3.371230556s
	W0725 11:21:25.029631    5965 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c6:46:e6:d8:20:54
	I0725 11:21:25.029948    5965 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:21:25.029970    5965 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:21:25.039405    5965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53835
	I0725 11:21:25.039884    5965 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:21:25.040325    5965 main.go:141] libmachine: Using API Version  1
	I0725 11:21:25.040337    5965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:21:25.040627    5965 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:21:25.040995    5965 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:21:25.041055    5965 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:21:25.049589    5965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53837
	I0725 11:21:25.049925    5965 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:21:25.050271    5965 main.go:141] libmachine: Using API Version  1
	I0725 11:21:25.050289    5965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:21:25.050532    5965 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:21:25.050685    5965 main.go:141] libmachine: (offline-docker-633000) Calling .GetState
	I0725 11:21:25.050774    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:25.050851    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:25.051863    5965 main.go:141] libmachine: (offline-docker-633000) Calling .DriverName
	I0725 11:21:25.113952    5965 out.go:177] * Deleting "offline-docker-633000" in hyperkit ...
	I0725 11:21:25.135074    5965 main.go:141] libmachine: (offline-docker-633000) Calling .Remove
	I0725 11:21:25.135224    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:25.135231    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:25.135301    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:25.136259    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:25.136329    5965 main.go:141] libmachine: (offline-docker-633000) DBG | waiting for graceful shutdown
	I0725 11:21:26.136596    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:26.136698    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:26.137621    5965 main.go:141] libmachine: (offline-docker-633000) DBG | waiting for graceful shutdown
	I0725 11:21:27.139767    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:27.139853    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:27.141468    5965 main.go:141] libmachine: (offline-docker-633000) DBG | waiting for graceful shutdown
	I0725 11:21:28.143037    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:28.143095    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:28.143840    5965 main.go:141] libmachine: (offline-docker-633000) DBG | waiting for graceful shutdown
	I0725 11:21:29.144523    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:29.144601    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:29.145169    5965 main.go:141] libmachine: (offline-docker-633000) DBG | waiting for graceful shutdown
	I0725 11:21:30.147064    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:30.147205    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6015
	I0725 11:21:30.148162    5965 main.go:141] libmachine: (offline-docker-633000) DBG | sending sigkill
	I0725 11:21:30.148171    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:30.160082    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:21:30 WARN : hyperkit: failed to read stderr: EOF
	I0725 11:21:30.160113    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:21:30 WARN : hyperkit: failed to read stdout: EOF
	W0725 11:21:30.178313    5965 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c6:46:e6:d8:20:54
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c6:46:e6:d8:20:54
	I0725 11:21:30.178334    5965 start.go:729] Will try again in 5 seconds ...
	I0725 11:21:35.178949    5965 start.go:360] acquireMachinesLock for offline-docker-633000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:22:27.884258    5965 start.go:364] duration metric: took 52.704276165s to acquireMachinesLock for "offline-docker-633000"
	I0725 11:22:27.884287    5965 start.go:93] Provisioning new machine with config: &{Name:offline-docker-633000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-633000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:22:27.884336    5965 start.go:125] createHost starting for "" (driver="hyperkit")
	I0725 11:22:27.906068    5965 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:22:27.906145    5965 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:22:27.906168    5965 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:22:27.914744    5965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53845
	I0725 11:22:27.915070    5965 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:22:27.915402    5965 main.go:141] libmachine: Using API Version  1
	I0725 11:22:27.915411    5965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:22:27.915629    5965 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:22:27.915760    5965 main.go:141] libmachine: (offline-docker-633000) Calling .GetMachineName
	I0725 11:22:27.915860    5965 main.go:141] libmachine: (offline-docker-633000) Calling .DriverName
	I0725 11:22:27.915969    5965 start.go:159] libmachine.API.Create for "offline-docker-633000" (driver="hyperkit")
	I0725 11:22:27.915982    5965 client.go:168] LocalClient.Create starting
	I0725 11:22:27.916010    5965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem
	I0725 11:22:27.916061    5965 main.go:141] libmachine: Decoding PEM data...
	I0725 11:22:27.916070    5965 main.go:141] libmachine: Parsing certificate...
	I0725 11:22:27.916108    5965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem
	I0725 11:22:27.916146    5965 main.go:141] libmachine: Decoding PEM data...
	I0725 11:22:27.916158    5965 main.go:141] libmachine: Parsing certificate...
	I0725 11:22:27.916171    5965 main.go:141] libmachine: Running pre-create checks...
	I0725 11:22:27.916175    5965 main.go:141] libmachine: (offline-docker-633000) Calling .PreCreateCheck
	I0725 11:22:27.916250    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:27.916305    5965 main.go:141] libmachine: (offline-docker-633000) Calling .GetConfigRaw
	I0725 11:22:27.947915    5965 main.go:141] libmachine: Creating machine...
	I0725 11:22:27.947924    5965 main.go:141] libmachine: (offline-docker-633000) Calling .Create
	I0725 11:22:27.948018    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:27.948164    5965 main.go:141] libmachine: (offline-docker-633000) DBG | I0725 11:22:27.948013    6250 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:22:27.948197    5965 main.go:141] libmachine: (offline-docker-633000) Downloading /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1195/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 11:22:28.177845    5965 main.go:141] libmachine: (offline-docker-633000) DBG | I0725 11:22:28.177746    6250 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/id_rsa...
	I0725 11:22:28.383296    5965 main.go:141] libmachine: (offline-docker-633000) DBG | I0725 11:22:28.383205    6250 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/offline-docker-633000.rawdisk...
	I0725 11:22:28.383309    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Writing magic tar header
	I0725 11:22:28.383322    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Writing SSH key tar header
	I0725 11:22:28.383910    5965 main.go:141] libmachine: (offline-docker-633000) DBG | I0725 11:22:28.383858    6250 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000 ...
	I0725 11:22:28.758739    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:28.758760    5965 main.go:141] libmachine: (offline-docker-633000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/hyperkit.pid
	I0725 11:22:28.758791    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Using UUID c8227c4a-ef72-4806-a312-b8cb28b13a77
	I0725 11:22:28.788081    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Generated MAC b6:df:3c:e2:20:bd
	I0725 11:22:28.788099    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-633000
	I0725 11:22:28.788127    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c8227c4a-ef72-4806-a312-b8cb28b13a77", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00013c1b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:22:28.788152    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c8227c4a-ef72-4806-a312-b8cb28b13a77", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00013c1b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:22:28.788213    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "c8227c4a-ef72-4806-a312-b8cb28b13a77", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/offline-docker-633000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-633000"}
	I0725 11:22:28.788265    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U c8227c4a-ef72-4806-a312-b8cb28b13a77 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/offline-docker-633000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-633000"
	I0725 11:22:28.788282    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
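
	[editor note] The DEBUG lines above show the complete argv the hyperkit driver hands to /usr/local/bin/hyperkit before it starts polling for a DHCP lease. The snippet below is a minimal, hypothetical Go sketch of assembling and launching such a command with os/exec; the state-directory path and the trimmed kernel cmdline are placeholders, and this is not the actual docker-machine-driver-hyperkit implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Placeholder state directory; the real driver uses the machine dir shown in the log.
		stateDir := "/tmp/offline-docker-633000"
		kernelCmdline := "earlyprintk=serial loglevel=3 console=ttyS0 console=tty0" // trimmed for brevity

		args := []string{
			"-A", "-u",
			"-F", stateDir + "/hyperkit.pid", // pidfile
			"-c", "2",     // vCPUs
			"-m", "2048M", // memory
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net", // vmnet-backed NIC; its generated MAC is what the lease search below looks for
			"-U", "c8227c4a-ef72-4806-a312-b8cb28b13a77",
			"-s", "2:0,virtio-blk," + stateDir + "/offline-docker-633000.rawdisk",
			"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
			"-f", fmt.Sprintf("kexec,%s/bzimage,%s/initrd,%s", stateDir, stateDir, kernelCmdline),
		}

		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		// Start (not Run): hyperkit stays up as the VM process and the driver records
		// its pid, which is what the later "hyperkit pid from json: 6253" lines refer to.
		if err := cmd.Start(); err != nil {
			fmt.Println("starting hyperkit failed:", err)
			return
		}
		fmt.Println("hyperkit started with pid", cmd.Process.Pid)
	}
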
	I0725 11:22:28.791252    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 DEBUG: hyperkit: Pid is 6253
	I0725 11:22:28.792055    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 0
	I0725 11:22:28.792066    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:28.792143    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:28.793208    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:28.793293    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:28.793320    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:28.793336    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:28.793343    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:28.793350    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:28.793357    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:28.793365    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:28.793372    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:28.793379    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:28.793387    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:28.793396    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:28.793405    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:28.793411    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:28.793419    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:28.793425    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:28.793431    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:28.793439    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:28.793453    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:28.793470    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
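
	[editor note] The "Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases" block above repeats on every attempt because the new VM's MAC never shows up among the 18 existing entries. As a rough illustration only, here is a hedged Go sketch of that kind of lookup; the hw_address=1,<mac> / ip_address=<ip> key layout is an assumption about the macOS bootpd lease file, not something taken from this report.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// ipForMAC scans the lease file and returns the ip_address of the entry whose
	// hw_address matches mac, or "" if no matching lease exists yet.
	func ipForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// Assumed format: "hw_address=1,b6:df:3c:e2:20:bd".
				if strings.HasSuffix(line, ","+mac) || strings.HasSuffix(line, "="+mac) {
					return ip, nil
				}
			}
		}
		return "", sc.Err()
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "b6:df:3c:e2:20:bd")
		if err != nil {
			fmt.Println("lease lookup failed:", err)
			return
		}
		if ip == "" {
			fmt.Println("no lease yet; the caller would retry after a short sleep")
			return
		}
		fmt.Println("found IP:", ip)
	}

	The retry cadence wrapped around this lookup is sketched after the next attempt's block below.
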
	I0725 11:22:28.799700    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 11:22:28.807797    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/offline-docker-633000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 11:22:28.808705    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:22:28.808729    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:22:28.808755    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:22:28.808792    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:22:29.186047    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 11:22:29.186063    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 11:22:29.300913    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:22:29.300955    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:22:29.300989    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:22:29.301023    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:22:29.301798    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 11:22:29.301810    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 11:22:30.794422    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 1
	I0725 11:22:30.794439    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:30.794494    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:30.795286    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:30.795350    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:30.795364    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:30.795372    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:30.795382    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:30.795406    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:30.795420    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:30.795439    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:30.795450    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:30.795459    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:30.795466    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:30.795474    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:30.795481    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:30.795489    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:30.795495    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:30.795507    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:30.795515    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:30.795524    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:30.795532    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:30.795540    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
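
	[editor note] Each "Attempt N" header above comes from the driver's wait-for-IP loop: rescan the lease file, find nothing for the new MAC, sleep about two seconds, and try again until an overall timeout expires. A minimal, hypothetical sketch of such a loop (not the driver's actual wait logic) follows.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls lookup until it returns a non-empty IP or the deadline passes.
	// The two-second sleep mirrors the cadence visible in the attempts above.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			ip, err := lookup()
			if err != nil {
				return "", err
			}
			if ip != "" {
				return ip, nil
			}
			fmt.Println("Attempt", attempt, ": no lease yet")
			time.Sleep(2 * time.Second)
		}
		return "", errors.New("timed out waiting for a DHCP lease")
	}

	func main() {
		// Stub lookup that never finds a lease, like the failing run in this log.
		ip, err := waitForIP(func() (string, error) { return "", nil }, 10*time.Second)
		fmt.Println(ip, err)
	}
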
	I0725 11:22:32.796373    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 2
	I0725 11:22:32.796388    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:32.796508    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:32.797445    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:32.797494    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:32.797514    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:32.797522    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:32.797530    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:32.797537    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:32.797550    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:32.797562    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:32.797583    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:32.797594    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:32.797600    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:32.797608    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:32.797614    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:32.797623    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:32.797629    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:32.797636    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:32.797644    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:32.797652    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:32.797661    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:32.797667    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:34.674956    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0725 11:22:34.675127    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0725 11:22:34.675138    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0725 11:22:34.695094    5965 main.go:141] libmachine: (offline-docker-633000) DBG | 2024/07/25 11:22:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0725 11:22:34.799907    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 3
	I0725 11:22:34.799931    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:34.800140    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:34.801573    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:34.801694    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:34.801714    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:34.801732    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:34.801744    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:34.801768    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:34.801781    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:34.801794    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:34.801814    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:34.801859    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:34.801877    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:34.801897    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:34.801914    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:34.801927    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:34.801956    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:34.801977    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:34.801997    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:34.802018    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:34.802034    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:34.802048    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:36.802251    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 4
	I0725 11:22:36.802266    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:36.802376    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:36.803167    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:36.803224    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:36.803234    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:36.803243    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:36.803252    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:36.803262    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:36.803272    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:36.803279    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:36.803287    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:36.803293    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:36.803327    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:36.803341    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:36.803349    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:36.803357    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:36.803364    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:36.803372    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:36.803382    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:36.803395    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:36.803409    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:36.803423    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:38.805422    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 5
	I0725 11:22:38.805443    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:38.805519    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:38.806355    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:38.806397    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:38.806409    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:38.806436    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:38.806453    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:38.806467    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:38.806477    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:38.806484    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:38.806494    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:38.806501    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:38.806508    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:38.806529    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:38.806537    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:38.806544    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:38.806550    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:38.806562    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:38.806574    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:38.806588    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:38.806597    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:38.806612    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:40.808650    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 6
	I0725 11:22:40.808666    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:40.808701    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:40.809476    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:40.809529    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:40.809542    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:40.809570    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:40.809598    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:40.809613    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:40.809622    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:40.809630    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:40.809638    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:40.809644    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:40.809650    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:40.809657    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:40.809662    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:40.809676    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:40.809684    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:40.809691    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:40.809699    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:40.809706    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:40.809714    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:40.809722    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:42.810585    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 7
	I0725 11:22:42.810598    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:42.810670    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:42.811470    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:42.811516    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:42.811529    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:42.811539    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:42.811545    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:42.811552    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:42.811558    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:42.811583    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:42.811592    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:42.811599    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:42.811606    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:42.811623    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:42.811635    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:42.811650    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:42.811662    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:42.811671    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:42.811677    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:42.811684    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:42.811691    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:42.811700    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:44.813877    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 8
	I0725 11:22:44.813892    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:44.813952    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:44.814767    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:44.814799    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:44.814810    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:44.814820    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:44.814829    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:44.814843    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:44.814855    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:44.814866    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:44.814876    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:44.814883    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:44.814891    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:44.814900    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:44.814909    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:44.814916    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:44.814924    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:44.814941    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:44.814952    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:44.814962    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:44.814968    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:44.814986    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:46.815273    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 9
	I0725 11:22:46.815285    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:46.815412    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:46.816220    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:46.816268    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:46.816288    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:46.816299    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:46.816307    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:46.816313    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:46.816322    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:46.816336    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:46.816356    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:46.816365    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:46.816372    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:46.816379    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:46.816387    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:46.816394    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:46.816402    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:46.816415    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:46.816425    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:46.816432    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:46.816439    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:46.816445    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:48.817874    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 10
	I0725 11:22:48.817888    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:48.817948    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:48.818705    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:48.818760    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:48.818768    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:48.818775    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:48.818781    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:48.818795    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:48.818808    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:48.818825    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:48.818834    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:48.818840    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:48.818848    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:48.818855    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:48.818862    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:48.818868    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:48.818874    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:48.818885    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:48.818891    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:48.818897    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:48.818905    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:48.818914    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:50.820250    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 11
	I0725 11:22:50.820278    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:50.820314    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:50.821181    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:50.821222    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:50.821232    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:50.821252    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:50.821263    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:50.821270    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:50.821277    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:50.821292    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:50.821298    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:50.821311    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:50.821319    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:50.821326    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:50.821334    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:50.821342    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:50.821350    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:50.821356    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:50.821366    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:50.821374    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:50.821381    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:50.821390    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:52.821435    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 12
	I0725 11:22:52.821459    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:52.821540    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:52.822387    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:52.822400    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:52.822407    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:52.822414    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:52.822420    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:52.822427    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:52.822434    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:52.822440    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:52.822446    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:52.822452    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:52.822461    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:52.822467    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:52.822474    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:52.822482    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:52.822493    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:52.822500    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:52.822506    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:52.822514    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:52.822521    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:52.822528    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
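
Editor's note: the attempt blocks above and below show the hyperkit driver polling /var/db/dhcpd_leases roughly every two seconds, searching for a lease whose hardware address matches the new VM's MAC (b6:df:3c:e2:20:bd); each attempt finds the same 18 unrelated "minikube" entries and none for the target MAC. As a purely hypothetical sketch (not the minikube or docker-machine-driver-hyperkit source), the pattern corresponds to a retry loop like the one below, where readLeases is an assumed stub standing in for the real lease-file parsing:

```go
// Hypothetical illustration only -- NOT the minikube / docker-machine-driver-hyperkit
// source. It mirrors the pattern visible in the DBG lines: poll the DHCP lease
// list every ~2 seconds and look for the MAC address assigned to the new VM.
package main

import (
	"fmt"
	"time"
)

// DHCPEntry mirrors the fields printed in each "dhcp entry" log line.
type DHCPEntry struct {
	Name      string
	IPAddress string
	HWAddress string
	ID        string
	Lease     string
}

// readLeases is a stub standing in for parsing /var/db/dhcpd_leases; the real
// driver reads that file, which this sketch does not attempt to reproduce.
func readLeases() []DHCPEntry {
	return []DHCPEntry{
		{Name: "minikube", IPAddress: "192.169.0.19", HWAddress: "96:fa:ee:ec:8e:9", ID: "1,96:fa:ee:ec:8e:9", Lease: "0x66a3e882"},
		// ... the other 17 entries seen in the log would follow here ...
	}
}

// waitForIP retries every two seconds (the interval between "Attempt N" lines)
// until the target MAC shows up in a lease or maxAttempts is exhausted.
func waitForIP(mac string, maxAttempts int) (string, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		for _, e := range readLeases() {
			if e.HWAddress == mac {
				return e.IPAddress, nil
			}
		}
		fmt.Printf("Attempt %d: no lease for %s yet\n", attempt, mac)
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, maxAttempts)
}

func main() {
	if ip, err := waitForIP("b6:df:3c:e2:20:bd", 30); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("VM IP:", ip)
	}
}
```

Because the target MAC never appears in the lease list, a loop of this shape eventually exhausts its attempts and the machine-creation step gives up, which is consistent with the repeated attempts recorded below.
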
	I0725 11:22:54.824616    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 13
	I0725 11:22:54.824632    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:54.824642    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:54.825453    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:54.825499    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:54.825514    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:54.825526    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:54.825545    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:54.825555    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:54.825571    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:54.825584    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:54.825598    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:54.825608    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:54.825615    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:54.825623    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:54.825640    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:54.825652    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:54.825668    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:54.825682    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:54.825690    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:54.825696    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:54.825710    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:54.825721    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:56.827723    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 14
	I0725 11:22:56.827738    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:56.827800    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:56.828595    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:56.828655    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:56.828669    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:56.828680    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:56.828687    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:56.828694    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:56.828700    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:56.828707    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:56.828713    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:56.828720    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:56.828725    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:56.828740    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:56.828758    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:56.828774    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:56.828786    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:56.828798    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:56.828812    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:56.828820    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:56.828827    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:56.828836    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:58.830878    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 15
	I0725 11:22:58.830895    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:58.831012    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:22:58.831766    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:22:58.831817    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:58.831828    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:58.831837    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:58.831844    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:58.831852    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:58.831858    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:58.831864    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:58.831872    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:58.831879    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:58.831887    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:58.831909    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:58.831922    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:58.831938    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:58.831950    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:58.831970    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:58.831982    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:58.831990    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:58.831998    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:58.832007    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:00.833518    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 16
	I0725 11:23:00.833532    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:00.833618    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:00.834433    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:00.834489    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:00.834502    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:00.834524    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:00.834531    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:00.834540    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:00.834552    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:00.834560    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:00.834572    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:00.834581    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:00.834587    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:00.834610    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:00.834619    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:00.834628    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:00.834637    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:00.834654    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:00.834665    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:00.834674    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:00.834685    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:00.834696    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:02.834703    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 17
	I0725 11:23:02.834717    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:02.834816    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:02.835597    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:02.835642    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:02.835654    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:02.835669    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:02.835678    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:02.835686    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:02.835705    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:02.835718    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:02.835727    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:02.835733    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:02.835740    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:02.835748    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:02.835760    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:02.835767    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:02.835782    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:02.835794    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:02.835803    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:02.835811    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:02.835818    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:02.835824    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:04.837846    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 18
	I0725 11:23:04.837858    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:04.837950    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:04.838689    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:04.838731    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:04.838739    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:04.838751    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:04.838758    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:04.838769    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:04.838780    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:04.838792    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:04.838802    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:04.838810    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:04.838818    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:04.838826    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:04.838832    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:04.838849    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:04.838857    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:04.838864    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:04.838871    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:04.838878    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:04.838885    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:04.838898    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:06.840938    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 19
	I0725 11:23:06.840954    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:06.841013    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:06.841815    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:06.841861    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:06.841873    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:06.841882    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:06.841889    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:06.841900    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:06.841912    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:06.841922    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:06.841931    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:06.841941    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:06.841950    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:06.841957    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:06.841965    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:06.841971    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:06.841978    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:06.841984    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:06.841992    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:06.841999    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:06.842007    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:06.842014    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:08.842691    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 20
	I0725 11:23:08.842707    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:08.842756    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:08.843630    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:08.843674    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:08.843685    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:08.843693    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:08.843698    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:08.843737    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:08.843755    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:08.843764    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:08.843773    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:08.843783    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:08.843794    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:08.843802    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:08.843810    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:08.843816    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:08.843823    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:08.843834    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:08.843842    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:08.843855    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:08.843869    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:08.843878    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:10.844886    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 21
	I0725 11:23:10.844902    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:10.844967    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:10.845741    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:10.845791    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:10.845800    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:10.845814    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:10.845822    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:10.845829    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:10.845851    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:10.845866    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:10.845879    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:10.845903    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:10.845916    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:10.845942    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:10.845960    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:10.845971    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:10.845979    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:10.845992    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:10.846001    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:10.846009    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:10.846016    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:10.846025    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:12.846015    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 22
	I0725 11:23:12.846028    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:12.846077    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:12.846905    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:12.846949    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:12.846963    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:12.846974    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:12.846984    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:12.846992    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:12.846999    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:12.847006    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:12.847014    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:12.847029    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:12.847043    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:12.847051    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:12.847059    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:12.847076    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:12.847084    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:12.847091    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:12.847099    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:12.847105    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:12.847113    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:12.847122    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:14.848626    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 23
	I0725 11:23:14.848643    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:14.848653    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:14.849457    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:14.849469    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:14.849476    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:14.849490    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:14.849496    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:14.849504    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:14.849512    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:14.849529    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:14.849552    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:14.849567    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:14.849579    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:14.849588    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:14.849596    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:14.849603    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:14.849613    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:14.849621    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:14.849630    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:14.849637    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:14.849643    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:14.849657    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:16.851670    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 24
	I0725 11:23:16.851685    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:16.851748    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:16.852596    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:16.852647    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:16.852661    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:16.852676    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:16.852688    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:16.852700    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:16.852706    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:16.852714    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:16.852722    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:16.852728    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:16.852770    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:16.852808    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:16.852838    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:16.852848    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:16.852856    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:16.852863    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:16.852870    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:16.852894    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:16.852913    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:16.852922    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:18.854974    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 25
	I0725 11:23:18.854990    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:18.855018    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:18.856186    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:18.856239    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:18.856247    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:18.856256    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:18.856264    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:18.856294    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:18.856311    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:18.856320    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:18.856328    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:18.856354    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:18.856365    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:18.856373    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:18.856395    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:18.856409    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:18.856420    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:18.856428    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:18.856435    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:18.856443    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:18.856451    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:18.856466    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:20.856593    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 26
	I0725 11:23:20.856608    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:20.856633    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:20.857474    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:20.857487    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:20.857499    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:20.857507    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:20.857513    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:20.857519    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:20.857525    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:20.857532    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:20.857538    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:20.857551    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:20.857562    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:20.857570    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:20.857578    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:20.857586    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:20.857594    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:20.857601    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:20.857616    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:20.857631    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:20.857644    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:20.857654    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:22.858925    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 27
	I0725 11:23:22.858939    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:22.859028    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:22.859821    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:22.859861    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:22.859883    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:22.859902    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:22.859929    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:22.859943    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:22.859953    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:22.859968    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:22.859978    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:22.859987    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:22.859993    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:22.859999    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:22.860007    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:22.860022    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:22.860031    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:22.860038    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:22.860046    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:22.860053    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:22.860075    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:22.860089    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:24.862230    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 28
	I0725 11:23:24.862245    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:24.862385    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:24.863157    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:24.863210    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:24.863220    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:24.863230    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:24.863248    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:24.863255    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:24.863262    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:24.863270    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:24.863277    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:24.863283    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:24.863289    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:24.863297    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:24.863305    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:24.863312    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:24.863319    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:24.863335    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:24.863344    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:24.863350    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:24.863358    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:24.863367    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:26.864494    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Attempt 29
	I0725 11:23:26.864512    5965 main.go:141] libmachine: (offline-docker-633000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:26.864522    5965 main.go:141] libmachine: (offline-docker-633000) DBG | hyperkit pid from json: 6253
	I0725 11:23:26.865299    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Searching for b6:df:3c:e2:20:bd in /var/db/dhcpd_leases ...
	I0725 11:23:26.865359    5965 main.go:141] libmachine: (offline-docker-633000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:26.865368    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:26.865377    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:26.865391    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:26.865399    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:26.865404    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:26.865417    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:26.865429    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:26.865439    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:26.865449    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:26.865459    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:26.865467    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:26.865501    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:26.865511    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:26.865519    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:26.865527    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:26.865534    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:26.865542    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:26.865560    5965 main.go:141] libmachine: (offline-docker-633000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:28.867672    5965 client.go:171] duration metric: took 1m0.950562767s to LocalClient.Create
	I0725 11:23:30.867886    5965 start.go:128] duration metric: took 1m2.982381707s to createHost
	I0725 11:23:30.867902    5965 start.go:83] releasing machines lock for "offline-docker-633000", held for 1m2.982470924s
	W0725 11:23:30.868028    5965 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-633000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:df:3c:e2:20:bd
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-633000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:df:3c:e2:20:bd
	I0725 11:23:30.931271    5965 out.go:177] 
	W0725 11:23:30.952298    5965 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:df:3c:e2:20:bd
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:df:3c:e2:20:bd
	W0725 11:23:30.952318    5965 out.go:239] * 
	* 
	W0725 11:23:30.952916    5965 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:23:31.015229    5965 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-633000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-25 11:23:31.126781 -0700 PDT m=+3328.179666627
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-633000 -n offline-docker-633000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-633000 -n offline-docker-633000: exit status 7 (85.688831ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 11:23:31.210514    6298 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0725 11:23:31.210536    6298 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-633000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-633000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-633000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-633000: (5.243586282s)
--- FAIL: TestOffline (195.37s)
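
The attempts logged above show the hyperkit driver repeatedly scanning /var/db/dhcpd_leases for the VM's MAC address (b6:df:3c:e2:20:bd) and never finding it. As a standalone cross-check on the affected host, the following sketch (not part of the test suite; it assumes the hw_address=1,<mac> entry format implied by the parsed "dhcp entry" lines above) reports whether a given MAC ever shows up in the leases file:

	// leasecheck.go: minimal sketch, not part of minikube; assumes the
	// hw_address=1,<mac> entry format implied by the parsed dhcp entries above.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		if len(os.Args) != 2 {
			fmt.Fprintln(os.Stderr, "usage: leasecheck <mac>")
			os.Exit(2)
		}
		mac := strings.ToLower(os.Args[1]) // e.g. b6:df:3c:e2:20:bd from this run

		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Fprintln(os.Stderr, "open leases file:", err)
			os.Exit(1)
		}
		defer f.Close()

		found := false
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			// Lease entries carry lines such as "hw_address=1,96:fa:ee:ec:8e:9".
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac) {
				found = true
			}
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "read leases file:", err)
			os.Exit(1)
		}
		fmt.Printf("lease for %s present: %v\n", mac, found)
	}

Running it as `go run leasecheck.go b6:df:3c:e2:20:bd` on the build host would help distinguish a VM that never requested a lease from a driver-side parsing problem.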

                                                
                                    
TestCertOptions (251.62s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-841000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0725 11:30:04.835362    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:30:32.534784    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:31:24.654659    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-841000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m5.94248179s)

                                                
                                                
-- stdout --
	* [cert-options-841000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-841000" primary control-plane node in "cert-options-841000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-841000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:ba:70:3d:36:79
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-841000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2a:4b:5c:f8:66:dd
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2a:4b:5c:f8:66:dd
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-841000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-841000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-841000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (160.222059ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-841000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-841000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-841000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-841000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-841000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (161.675413ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-841000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-841000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-841000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-25 11:32:57.93694 -0700 PDT m=+3894.979424304
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-841000 -n cert-options-841000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-841000 -n cert-options-841000: exit status 7 (76.208134ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 11:32:58.011478    7042 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0725 11:32:58.011504    7042 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-841000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-841000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-841000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-841000: (5.23715477s)
--- FAIL: TestCertOptions (251.62s)
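
The SAN assertions above (127.0.0.1, 192.168.15.15, localhost, www.google.com) never had a certificate to inspect because the VM got no IP address to SSH to. For reference, once /var/lib/minikube/certs/apiserver.crt can be copied out of a working VM, the check the test performs amounts to listing the certificate's SAN entries; a minimal sketch (the local file path is an assumption):

	// sancheck.go: minimal sketch; assumes a local copy of a PEM-encoded
	// apiserver.crt has been retrieved from the VM (the path is an argument).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		if len(os.Args) != 2 {
			fmt.Fprintln(os.Stderr, "usage: sancheck <cert.pem>")
			os.Exit(2)
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// TestCertOptions expects localhost and www.google.com among the DNS
		// SANs, and 127.0.0.1 and 192.168.15.15 among the IP SANs.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}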

                                                
                                    
TestCertExpiration (1718.01s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-055000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0725 11:27:48.690499    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-055000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.173600197s)

                                                
                                                
-- stdout --
	* [cert-expiration-055000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-055000" primary control-plane node in "cert-expiration-055000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-055000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:44:8e:fe:fa:f9
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-055000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d6:e6:9e:35:5:22
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d6:e6:9e:35:5:22
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-055000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
E0725 11:32:20.248203    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-055000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0725 11:35:04.846880    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-055000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (21m26.506234404s)

                                                
                                                
-- stdout --
	* [cert-expiration-055000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-055000" primary control-plane node in "cert-expiration-055000" cluster
	* Updating the running hyperkit "cert-expiration-055000" VM ...
	* Updating the running hyperkit "cert-expiration-055000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-055000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-055000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-055000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-055000" primary control-plane node in "cert-expiration-055000" cluster
	* Updating the running hyperkit "cert-expiration-055000" VM ...
	* Updating the running hyperkit "cert-expiration-055000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-055000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-25 11:56:21.04303 -0700 PDT m=+5298.154018265
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-055000 -n cert-expiration-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-055000 -n cert-expiration-055000: exit status 7 (79.721416ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 11:56:21.121025    8266 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0725 11:56:21.121060    8266 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-055000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-055000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-055000
E0725 11:56:24.585961    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-055000: (5.24475251s)
--- FAIL: TestCertExpiration (1718.01s)
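
Both starts failed before any certificate aged out, so the expired-cert warning the test looks for was never exercised. For completeness, the expiry condition itself is only a NotAfter comparison; a minimal sketch (again assuming a locally available PEM copy of one of the profile's certificates, such as apiserver.crt):

	// expirycheck.go: minimal sketch; prints a certificate's NotAfter time and
	// whether it has already expired. The certificate path is an assumption.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		if len(os.Args) != 2 {
			fmt.Fprintln(os.Stderr, "usage: expirycheck <cert.pem>")
			os.Exit(2)
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// With --cert-expiration=3m the cluster certificates are minted to
		// expire roughly three minutes after creation, which is what the
		// second start is expected to warn about.
		fmt.Println("NotAfter:", cert.NotAfter)
		fmt.Println("expired: ", time.Now().After(cert.NotAfter))
	}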

                                                
                                    
TestDockerFlags (252.01s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-087000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0725 11:25:04.831215    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:04.837624    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:04.848019    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:04.869090    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:04.911155    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:04.993237    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:05.155343    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:05.477407    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:06.118273    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:07.399267    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:09.961461    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:15.083647    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:25.326003    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:25:45.806815    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:26:24.647773    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 11:26:26.768799    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:27:03.298502    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 11:27:20.242669    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-087000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.272467151s)

                                                
                                                
-- stdout --
	* [docker-flags-087000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-087000" primary control-plane node in "docker-flags-087000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-087000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 11:24:39.662943    6377 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:24:39.663118    6377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:24:39.663123    6377 out.go:304] Setting ErrFile to fd 2...
	I0725 11:24:39.663127    6377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:24:39.663286    6377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 11:24:39.664823    6377 out.go:298] Setting JSON to false
	I0725 11:24:39.687464    6377 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5049,"bootTime":1721926830,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 11:24:39.687557    6377 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:24:39.709086    6377 out.go:177] * [docker-flags-087000] minikube v1.33.1 on Darwin 14.5
	I0725 11:24:39.751885    6377 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:24:39.751926    6377 notify.go:220] Checking for updates...
	I0725 11:24:39.793623    6377 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 11:24:39.814780    6377 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 11:24:39.856677    6377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:24:39.898649    6377 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:24:39.919720    6377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:24:39.941221    6377 config.go:182] Loaded profile config "force-systemd-flag-521000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:24:39.941312    6377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:24:39.969820    6377 out.go:177] * Using the hyperkit driver based on user configuration
	I0725 11:24:40.011758    6377 start.go:297] selected driver: hyperkit
	I0725 11:24:40.011772    6377 start.go:901] validating driver "hyperkit" against <nil>
	I0725 11:24:40.011783    6377 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:24:40.014870    6377 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:24:40.014980    6377 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 11:24:40.023447    6377 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 11:24:40.027433    6377 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:24:40.027454    6377 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 11:24:40.027486    6377 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:24:40.027685    6377 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0725 11:24:40.027741    6377 cni.go:84] Creating CNI manager for ""
	I0725 11:24:40.027758    6377 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:24:40.027763    6377 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:24:40.027833    6377 start.go:340] cluster config:
	{Name:docker-flags-087000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-087000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:24:40.027919    6377 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:24:40.069700    6377 out.go:177] * Starting "docker-flags-087000" primary control-plane node in "docker-flags-087000" cluster
	I0725 11:24:40.090719    6377 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:24:40.090755    6377 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0725 11:24:40.090769    6377 cache.go:56] Caching tarball of preloaded images
	I0725 11:24:40.090895    6377 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 11:24:40.090905    6377 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
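
The preload lines show the cache short-circuit: the preloaded-images tarball for v1.30.3 on docker already sits under .minikube/cache, so the download is skipped. A sketch of that existence check, assuming the file layout visible in the log and that presence of the tarball is enough (a real implementation would also verify a checksum):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the expected cache location for a preload tarball,
// following the layout visible in the log (cache/preloaded-tarball/...).
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

// havePreload reports whether the tarball is already cached, in which case
// the download can be skipped.
func havePreload(minikubeHome, k8sVersion, runtime string) (string, bool) {
	p := preloadPath(minikubeHome, k8sVersion, runtime)
	if _, err := os.Stat(p); err == nil {
		return p, true
	}
	return p, false
}

func main() {
	if p, ok := havePreload(os.Getenv("MINIKUBE_HOME"), "v1.30.3", "docker"); ok {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("preload not cached, would download:", p)
	}
}
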
	I0725 11:24:40.090985    6377 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/docker-flags-087000/config.json ...
	I0725 11:24:40.091002    6377 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/docker-flags-087000/config.json: {Name:mk4b5afd009931dc572a8b504b49442d8cf6509c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:24:40.091312    6377 start.go:360] acquireMachinesLock for docker-flags-087000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:25:36.963827    6377 start.go:364] duration metric: took 56.871450917s to acquireMachinesLock for "docker-flags-087000"
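
acquireMachinesLock blocked here for roughly 57 seconds because another profile held the per-host machines lock; the lock spec in the log carries Delay:500ms and Timeout:13m0s. A minimal polling sketch of a file-based lock with those parameters (the lock-file path is illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireFileLock polls for an exclusive lock file every `delay` until
// `timeout` expires, mirroring the Delay/Timeout fields shown in the log.
// O_CREATE|O_EXCL makes creation atomic, so only one process wins.
func acquireFileLock(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			return f.Close() // lock acquired; the caller removes the file to release it
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquireFileLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("took %s to acquire machines lock\n", time.Since(start))
}
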
	I0725 11:25:36.963860    6377 start.go:93] Provisioning new machine with config: &{Name:docker-flags-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-087000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:25:36.963924    6377 start.go:125] createHost starting for "" (driver="hyperkit")
	I0725 11:25:36.985614    6377 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:25:36.985761    6377 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:25:36.985809    6377 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:25:36.994260    6377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53879
	I0725 11:25:36.994604    6377 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:25:36.994996    6377 main.go:141] libmachine: Using API Version  1
	I0725 11:25:36.995007    6377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:25:36.995231    6377 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:25:36.995356    6377 main.go:141] libmachine: (docker-flags-087000) Calling .GetMachineName
	I0725 11:25:36.995471    6377 main.go:141] libmachine: (docker-flags-087000) Calling .DriverName
	I0725 11:25:36.995573    6377 start.go:159] libmachine.API.Create for "docker-flags-087000" (driver="hyperkit")
	I0725 11:25:36.995597    6377 client.go:168] LocalClient.Create starting
	I0725 11:25:36.995629    6377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem
	I0725 11:25:36.995681    6377 main.go:141] libmachine: Decoding PEM data...
	I0725 11:25:36.995702    6377 main.go:141] libmachine: Parsing certificate...
	I0725 11:25:36.995772    6377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem
	I0725 11:25:36.995812    6377 main.go:141] libmachine: Decoding PEM data...
	I0725 11:25:36.995823    6377 main.go:141] libmachine: Parsing certificate...
	I0725 11:25:36.995841    6377 main.go:141] libmachine: Running pre-create checks...
	I0725 11:25:36.995848    6377 main.go:141] libmachine: (docker-flags-087000) Calling .PreCreateCheck
	I0725 11:25:36.995922    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:36.996125    6377 main.go:141] libmachine: (docker-flags-087000) Calling .GetConfigRaw
	I0725 11:25:37.049152    6377 main.go:141] libmachine: Creating machine...
	I0725 11:25:37.049161    6377 main.go:141] libmachine: (docker-flags-087000) Calling .Create
	I0725 11:25:37.049254    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:37.049433    6377 main.go:141] libmachine: (docker-flags-087000) DBG | I0725 11:25:37.049257    6429 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:25:37.049469    6377 main.go:141] libmachine: (docker-flags-087000) Downloading /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1195/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 11:25:37.227825    6377 main.go:141] libmachine: (docker-flags-087000) DBG | I0725 11:25:37.227759    6429 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/id_rsa...
	I0725 11:25:37.300943    6377 main.go:141] libmachine: (docker-flags-087000) DBG | I0725 11:25:37.300877    6429 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/docker-flags-087000.rawdisk...
	I0725 11:25:37.300953    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Writing magic tar header
	I0725 11:25:37.300961    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Writing SSH key tar header
	I0725 11:25:37.301530    6377 main.go:141] libmachine: (docker-flags-087000) DBG | I0725 11:25:37.301482    6429 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000 ...
	I0725 11:25:37.678832    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:37.678855    6377 main.go:141] libmachine: (docker-flags-087000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/hyperkit.pid
	I0725 11:25:37.678890    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Using UUID 528b0acb-21a7-4374-9685-cf1444d54638
	I0725 11:25:37.702988    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Generated MAC 2a:0:cb:21:e8:e5
	I0725 11:25:37.703004    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-087000
	I0725 11:25:37.703056    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"528b0acb-21a7-4374-9685-cf1444d54638", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:25:37.703090    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"528b0acb-21a7-4374-9685-cf1444d54638", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:25:37.703165    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "528b0acb-21a7-4374-9685-cf1444d54638", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/docker-flags-087000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-087000"}
	I0725 11:25:37.703225    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 528b0acb-21a7-4374-9685-cf1444d54638 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/docker-flags-087000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-087000"
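
The CmdLine above is the exact hyperkit invocation the driver launches. A compact sketch of starting that command from Go and recording the child PID, in the spirit of the pid-file and later "hyperkit pid from json" lines; the flags are copied from the logged command line, while the paths are shortened and the extra launched.pid file is purely illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
)

func main() {
	state := "/Users/jenkins/.minikube/machines/docker-flags-087000" // illustrative, shortened path
	// Flags below are taken from the CmdLine in the log; only the paths are shortened.
	args := []string{
		"-A", "-u",
		"-F", state + "/hyperkit.pid",
		"-c", "2", "-m", "2048M",
		"-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net",
		"-U", "528b0acb-21a7-4374-9685-cf1444d54638",
		"-s", "2:0,virtio-blk," + state + "/docker-flags-087000.rawdisk",
		"-s", "3,ahci-cd," + state + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-f", "kexec," + state + "/bzimage," + state + "/initrd,loglevel=3 console=ttyS0",
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, "starting hyperkit:", err)
		return
	}
	// Record the PID so later polling code (the "Attempt N" lines) can check liveness.
	_ = os.WriteFile(state+"/launched.pid", []byte(strconv.Itoa(cmd.Process.Pid)), 0o644)
	fmt.Println("hyperkit pid:", cmd.Process.Pid)
}
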
	I0725 11:25:37.703249    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 11:25:37.706215    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 DEBUG: hyperkit: Pid is 6432
	I0725 11:25:37.707550    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 0
	I0725 11:25:37.707563    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:37.707689    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:37.708798    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:37.708873    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:37.708889    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:37.708919    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:37.708938    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:37.708952    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:37.708966    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:37.708989    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:37.709006    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:37.709027    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:37.709036    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:37.709044    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:37.709055    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:37.709062    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:37.709073    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:37.709081    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:37.709086    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:37.709105    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:37.709121    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:37.709131    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
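
Each "Attempt N" block is one pass over /var/db/dhcpd_leases looking for the freshly generated MAC 2a:0:cb:21:e8:e5; the VM has no lease yet, so only the 18 pre-existing minikube entries turn up. A sketch of that scan, assuming the usual macOS lease-file layout of brace-delimited records with ip_address= and hw_address=1,<mac> lines (the field names are an assumption about the file format, not something shown in the log):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP scans a dhcpd_leases-style file for a hardware address and
// returns the associated IP. The parser assumes brace-delimited records with
// "key=value" lines such as ip_address= and hw_address=1,<mac>.
func findLeaseIP(path, mac string) (string, bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", false, err
	}
	defer f.Close()

	var ip string
	var matched bool
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,2a:0:cb:21:e8:e5 -> compare the part after the comma
			if parts := strings.SplitN(line, ",", 2); len(parts) == 2 && strings.EqualFold(parts[1], mac) {
				matched = true
			}
		case line == "}":
			if matched {
				return ip, true, nil
			}
			ip, matched = "", false
		}
	}
	return "", false, sc.Err()
}

func main() {
	ip, ok, err := findLeaseIP("/var/db/dhcpd_leases", "2a:0:cb:21:e8:e5")
	if err != nil {
		fmt.Println("reading leases:", err)
		return
	}
	if !ok {
		fmt.Println("MAC not found yet; will retry")
		return
	}
	fmt.Println("found IP:", ip)
}
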
	I0725 11:25:37.714313    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 11:25:37.722325    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 11:25:37.723167    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:25:37.723219    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:25:37.723281    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:25:37.723297    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:25:38.099218    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 11:25:38.099233    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 11:25:38.213877    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:25:38.213900    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:25:38.213914    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:25:38.213923    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:25:38.214762    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 11:25:38.214772    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 11:25:39.709565    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 1
	I0725 11:25:39.709582    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:39.709643    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:39.710455    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:39.710501    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:39.710517    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:39.710533    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:39.710542    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:39.710551    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:39.710559    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:39.710574    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:39.710582    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:39.710590    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:39.710597    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:39.710604    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:39.710610    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:39.710630    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:39.710645    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:39.710657    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:39.710665    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:39.710673    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:39.710682    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:39.710695    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:41.712094    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 2
	I0725 11:25:41.712112    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:41.712204    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:41.713023    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:41.713083    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:41.713095    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:41.713104    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:41.713110    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:41.713118    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:41.713126    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:41.713132    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:41.713140    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:41.713146    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:41.713151    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:41.713157    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:41.713166    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:41.713174    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:41.713182    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:41.713189    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:41.713197    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:41.713211    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:41.713221    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:41.713230    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
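
The timestamps on the Attempt lines (11:25:37, :39, :41, ...) show the lease scan being retried on a roughly two-second cadence while the VM boots. A self-contained sketch of that polling loop; the lookup stub and the attempt limit are illustrative:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup every interval until it returns an IP or maxAttempts
// is reached, mirroring the two-second "Attempt N" cadence in the log.
func waitForIP(lookup func() (string, bool), interval time.Duration, maxAttempts int) (string, error) {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		fmt.Printf("Attempt %d\n", attempt)
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		time.Sleep(interval)
	}
	return "", errors.New("IP not found before attempt limit")
}

func main() {
	// Stand-in lookup: pretend the lease shows up on the fifth scan.
	tries := 0
	lookup := func() (string, bool) {
		tries++
		if tries >= 5 {
			return "192.169.0.20", true
		}
		return "", false
	}
	ip, err := waitForIP(lookup, 2*time.Second, 30)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("VM IP:", ip)
}
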
	I0725 11:25:43.598978    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 11:25:43.599070    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 11:25:43.599080    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 11:25:43.618988    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:25:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 11:25:43.713893    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 3
	I0725 11:25:43.713917    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:43.714151    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:43.715576    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:43.715713    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:43.715732    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:43.715746    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:43.715759    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:43.715787    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:43.715812    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:43.715829    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:43.715844    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:43.715873    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:43.715899    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:43.715930    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:43.715949    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:43.715977    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:43.716000    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:43.716014    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:43.716028    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:43.716041    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:43.716053    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:43.716079    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:45.717177    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 4
	I0725 11:25:45.717191    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:45.717261    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:45.718084    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:45.718148    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:45.718163    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:45.718172    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:45.718178    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:45.718184    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:45.718202    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:45.718211    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:45.718220    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:45.718228    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:45.718244    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:45.718256    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:45.718265    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:45.718274    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:45.718283    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:45.718292    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:45.718307    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:45.718321    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:45.718329    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:45.718335    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:47.718394    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 5
	I0725 11:25:47.718409    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:47.718452    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:47.719312    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:47.719364    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:47.719377    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:47.719399    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:47.719406    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:47.719413    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:47.719427    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:47.719435    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:47.719443    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:47.719450    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:47.719460    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:47.719473    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:47.719481    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:47.719495    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:47.719506    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:47.719513    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:47.719526    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:47.719533    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:47.719541    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:47.719550    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:49.721365    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 6
	I0725 11:25:49.721380    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:49.721445    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:49.722236    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:49.722280    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:49.722297    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:49.722320    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:49.722334    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:49.722346    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:49.722356    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:49.722364    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:49.722370    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:49.722379    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:49.722398    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:49.722410    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:49.722418    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:49.722427    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:49.722436    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:49.722444    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:49.722452    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:49.722457    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:49.722463    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:49.722471    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:51.722847    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 7
	I0725 11:25:51.722863    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:51.722906    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:51.723680    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:51.723701    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:51.723728    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:51.723738    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:51.723746    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:51.723757    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:51.723777    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:51.723793    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:51.723805    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:51.723813    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:51.723821    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:51.723828    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:51.723836    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:51.723843    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:51.723851    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:51.723859    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:51.723867    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:51.723874    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:51.723881    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:51.723893    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:53.725972    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 8
	I0725 11:25:53.725988    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:53.726069    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:53.726899    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:53.726931    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:53.726938    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:53.726951    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:53.726957    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:53.726968    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:53.726974    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:53.726981    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:53.726987    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:53.726995    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:53.727003    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:53.727011    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:53.727017    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:53.727023    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:53.727030    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:53.727036    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:53.727044    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:53.727060    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:53.727072    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:53.727095    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:55.727958    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 9
	I0725 11:25:55.727973    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:55.728035    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:55.728851    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:55.728885    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:55.728892    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:55.728913    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:55.728928    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:55.728944    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:55.728957    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:55.728970    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:55.728979    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:55.728992    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:55.729001    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:55.729015    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:55.729039    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:55.729048    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:55.729057    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:55.729071    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:55.729084    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:55.729092    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:55.729099    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:55.729107    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:57.730596    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 10
	I0725 11:25:57.730616    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:57.730738    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:57.731525    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:57.731584    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:57.731592    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:57.731601    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:57.731609    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:57.731622    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:57.731637    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:57.731645    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:57.731653    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:57.731667    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:57.731681    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:57.731689    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:57.731698    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:57.731705    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:57.731712    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:57.731719    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:57.731725    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:57.731735    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:57.731743    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:57.731759    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:59.731802    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 11
	I0725 11:25:59.731816    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:59.731899    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:25:59.732686    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:25:59.732718    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:59.732743    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:59.732760    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:59.732770    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:59.732777    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:59.732786    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:59.732800    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:59.732809    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:59.732816    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:59.732823    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:59.732837    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:59.732850    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:59.732860    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:59.732869    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:59.732878    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:59.732888    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:59.732896    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:59.732903    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:59.732911    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:01.734980    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 12
	I0725 11:26:01.734994    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:01.735061    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:01.736013    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:01.736065    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:01.736075    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:01.736087    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:01.736097    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:01.736125    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:01.736146    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:01.736159    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:01.736170    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:01.736185    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:01.736197    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:01.736205    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:01.736211    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:01.736232    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:01.736240    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:01.736246    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:01.736258    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:01.736267    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:01.736276    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:01.736291    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:03.737663    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 13
	I0725 11:26:03.737678    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:03.737778    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:03.738686    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:03.738746    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:03.738757    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:03.738770    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:03.738778    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:03.738786    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:03.738793    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:03.738800    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:03.738807    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:03.738815    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:03.738823    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:03.738829    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:03.738836    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:03.738843    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:03.738851    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:03.738868    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:03.738879    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:03.738888    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:03.738895    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:03.738903    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:05.738947    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 14
	I0725 11:26:05.738962    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:05.739007    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:05.739782    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:05.739834    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:05.739844    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:05.739853    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:05.739859    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:05.739876    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:05.739888    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:05.739896    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:05.739906    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:05.739913    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:05.739921    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:05.739930    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:05.739937    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:05.739944    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:05.739952    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:05.739958    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:05.739969    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:05.739991    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:05.740004    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:05.740020    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:07.741467    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 15
	I0725 11:26:07.741484    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:07.741599    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:07.742436    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:07.742496    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:07.742508    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:07.742517    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:07.742524    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:07.742530    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:07.742536    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:07.742542    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:07.742548    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:07.742555    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:07.742562    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:07.742569    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:07.742576    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:07.742586    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:07.742602    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:07.742616    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:07.742628    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:07.742635    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:07.742642    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:07.742647    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:09.743067    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 16
	I0725 11:26:09.743083    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:09.743138    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:09.743902    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:09.743963    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:09.743974    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:09.743983    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:09.743997    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:09.744008    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:09.744016    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:09.744024    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:09.744040    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:09.744055    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:09.744067    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:09.744077    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:09.744086    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:09.744101    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:09.744110    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:09.744117    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:09.744125    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:09.744132    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:09.744140    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:09.744149    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:11.744463    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 17
	I0725 11:26:11.744479    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:11.744561    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:11.745361    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:11.745403    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:11.745414    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:11.745423    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:11.745429    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:11.745436    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:11.745442    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:11.745464    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:11.745479    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:11.745487    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:11.745495    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:11.745508    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:11.745518    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:11.745527    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:11.745536    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:11.745555    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:11.745567    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:11.745580    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:11.745587    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:11.745596    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:13.747588    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 18
	I0725 11:26:13.747600    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:13.747678    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:13.748464    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:13.748526    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:13.748537    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:13.748561    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:13.748573    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:13.748592    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:13.748604    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:13.748619    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:13.748628    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:13.748635    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:13.748642    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:13.748648    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:13.748661    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:13.748676    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:13.748684    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:13.748690    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:13.748697    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:13.748705    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:13.748711    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:13.748717    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:15.748725    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 19
	I0725 11:26:15.748738    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:15.748775    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:15.749553    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:15.749583    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:15.749592    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:15.749601    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:15.749607    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:15.749616    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:15.749627    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:15.749635    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:15.749642    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:15.749648    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:15.749674    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:15.749689    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:15.749699    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:15.749707    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:15.749715    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:15.749722    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:15.749768    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:15.749784    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:15.749796    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:15.749804    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:17.750893    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 20
	I0725 11:26:17.750908    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:17.751035    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:17.751827    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:17.751888    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:17.751901    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:17.751916    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:17.751927    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:17.751937    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:17.751944    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:17.751964    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:17.751974    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:17.751983    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:17.751990    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:17.751998    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:17.752006    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:17.752013    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:17.752020    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:17.752035    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:17.752048    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:17.752063    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:17.752073    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:17.752082    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:19.754063    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 21
	I0725 11:26:19.754077    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:19.754156    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:19.754991    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:19.755040    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:19.755069    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:19.755078    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:19.755089    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:19.755098    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:19.755107    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:19.755113    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:19.755127    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:19.755144    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:19.755153    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:19.755161    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:19.755168    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:19.755176    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:19.755183    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:19.755190    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:19.755196    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:19.755203    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:19.755209    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:19.755217    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:21.755496    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 22
	I0725 11:26:21.755511    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:21.755582    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:21.756526    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:21.756581    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:21.756596    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:21.756613    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:21.756625    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:21.756632    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:21.756638    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:21.756652    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:21.756666    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:21.756673    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:21.756682    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:21.756689    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:21.756697    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:21.756704    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:21.756713    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:21.756720    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:21.756728    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:21.756739    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:21.756748    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:21.756756    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:23.758822    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 23
	I0725 11:26:23.758837    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:23.758892    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:23.759664    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:23.759705    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:23.759712    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:23.759734    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:23.759747    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:23.759755    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:23.759760    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:23.759768    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:23.759778    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:23.759787    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:23.759800    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:23.759813    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:23.759834    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:23.759844    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:23.759852    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:23.759860    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:23.759874    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:23.759886    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:23.759896    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:23.759904    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:25.760759    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 24
	I0725 11:26:25.760776    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:25.760856    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:25.761670    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:25.761693    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:25.761713    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:25.761739    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:25.761750    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:25.761762    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:25.761769    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:25.761775    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:25.761784    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:25.761791    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:25.761798    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:25.761812    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:25.761825    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:25.761837    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:25.761845    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:25.761852    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:25.761860    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:25.761870    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:25.761879    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:25.761892    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:27.761925    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 25
	I0725 11:26:27.761940    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:27.761997    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:27.762780    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:27.762822    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:27.762836    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:27.762867    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:27.762877    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:27.762885    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:27.762894    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:27.762904    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:27.762916    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:27.762923    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:27.762928    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:27.762957    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:27.762970    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:27.762984    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:27.762994    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:27.763009    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:27.763021    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:27.763038    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:27.763046    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:27.763056    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:29.763951    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 26
	I0725 11:26:29.763966    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:29.764046    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:29.764828    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:29.764893    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:29.764908    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:29.764924    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:29.764933    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:29.764940    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:29.764945    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:29.764951    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:29.764958    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:29.764966    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:29.764990    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:29.765004    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:29.765011    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:29.765019    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:29.765033    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:29.765044    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:29.765054    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:29.765072    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:29.765081    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:29.765089    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:31.767037    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 27
	I0725 11:26:31.767051    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:31.767091    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:31.768014    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:31.768023    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:31.768031    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:31.768038    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:31.768056    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:31.768068    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:31.768075    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:31.768081    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:31.768090    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:31.768099    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:31.768116    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:31.768125    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:31.768132    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:31.768140    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:31.768146    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:31.768154    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:31.768160    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:31.768168    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:31.768176    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:31.768184    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:33.768726    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 28
	I0725 11:26:33.768746    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:33.768854    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:33.769906    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:33.769965    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:33.769978    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:33.769989    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:33.769997    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:33.770006    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:33.770013    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:33.770041    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:33.770054    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:33.770063    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:33.770069    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:33.770075    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:33.770081    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:33.770087    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:33.770094    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:33.770100    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:33.770108    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:33.770115    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:33.770131    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:33.770148    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:35.771268    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 29
	I0725 11:26:35.771292    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:35.771367    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:35.772191    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 2a:0:cb:21:e8:e5 in /var/db/dhcpd_leases ...
	I0725 11:26:35.772243    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:35.772256    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:35.772264    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:35.772273    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:35.772281    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:35.772287    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:35.772294    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:35.772300    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:35.772308    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:35.772320    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:35.772328    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:35.772336    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:35.772351    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:35.772363    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:35.772385    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:35.772393    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:35.772428    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:35.772435    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:35.772466    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:37.773189    6377 client.go:171] duration metric: took 1m0.776462661s to LocalClient.Create
	I0725 11:26:39.775304    6377 start.go:128] duration metric: took 1m2.810212329s to createHost
	I0725 11:26:39.775320    6377 start.go:83] releasing machines lock for "docker-flags-087000", held for 1m2.810330692s
	W0725 11:26:39.775339    6377 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2a:0:cb:21:e8:e5
	I0725 11:26:39.775661    6377 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:26:39.775679    6377 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:26:39.784348    6377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53881
	I0725 11:26:39.784675    6377 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:26:39.785048    6377 main.go:141] libmachine: Using API Version  1
	I0725 11:26:39.785064    6377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:26:39.785313    6377 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:26:39.785700    6377 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:26:39.785720    6377 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:26:39.793989    6377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53883
	I0725 11:26:39.794351    6377 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:26:39.794685    6377 main.go:141] libmachine: Using API Version  1
	I0725 11:26:39.794700    6377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:26:39.794913    6377 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:26:39.795036    6377 main.go:141] libmachine: (docker-flags-087000) Calling .GetState
	I0725 11:26:39.795134    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:39.795197    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:39.796187    6377 main.go:141] libmachine: (docker-flags-087000) Calling .DriverName
	I0725 11:26:39.859822    6377 out.go:177] * Deleting "docker-flags-087000" in hyperkit ...
	I0725 11:26:39.880785    6377 main.go:141] libmachine: (docker-flags-087000) Calling .Remove
	I0725 11:26:39.880927    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:39.880941    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:39.880983    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:39.881925    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:39.881998    6377 main.go:141] libmachine: (docker-flags-087000) DBG | waiting for graceful shutdown
	I0725 11:26:40.882178    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:40.882282    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:40.883233    6377 main.go:141] libmachine: (docker-flags-087000) DBG | waiting for graceful shutdown
	I0725 11:26:41.884026    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:41.884103    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:41.885767    6377 main.go:141] libmachine: (docker-flags-087000) DBG | waiting for graceful shutdown
	I0725 11:26:42.886426    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:42.886557    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:42.887215    6377 main.go:141] libmachine: (docker-flags-087000) DBG | waiting for graceful shutdown
	I0725 11:26:43.888179    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:43.888261    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:43.888985    6377 main.go:141] libmachine: (docker-flags-087000) DBG | waiting for graceful shutdown
	I0725 11:26:44.890717    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:44.890795    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6432
	I0725 11:26:44.891734    6377 main.go:141] libmachine: (docker-flags-087000) DBG | sending sigkill
	I0725 11:26:44.891745    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:44.902943    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:26:44 WARN : hyperkit: failed to read stdout: EOF
	I0725 11:26:44.902961    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:26:44 WARN : hyperkit: failed to read stderr: EOF
	W0725 11:26:44.918601    6377 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2a:0:cb:21:e8:e5
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2a:0:cb:21:e8:e5
	I0725 11:26:44.918618    6377 start.go:729] Will try again in 5 seconds ...
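	The attempts logged above show the hyperkit driver's IP-discovery strategy: it generates a MAC address for the VM (here 2a:0:cb:21:e8:e5), then repeatedly re-reads the host's /var/db/dhcpd_leases file looking for a lease bound to that MAC, and after roughly 30 two-second polls gives up with "IP address never found in dhcp leases file". The sketch below is a minimal illustration of that polling pattern, not the driver's actual code; the file path, MAC, and retry budget are taken from the log, while the simple substring match is an assumption made for brevity.

	package main

	import (
		"fmt"
		"os"
		"strings"
		"time"
	)

	// waitForLease re-reads the dhcpd leases file until it mentions hwAddr,
	// mirroring the per-attempt "Searching for <mac> in /var/db/dhcpd_leases"
	// lines above. It returns true if the MAC shows up within maxAttempts polls.
	func waitForLease(leasesPath, hwAddr string, maxAttempts int, interval time.Duration) (bool, error) {
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			data, err := os.ReadFile(leasesPath)
			if err != nil && !os.IsNotExist(err) {
				return false, err
			}
			// Naive substring match on the MAC; the real driver parses each
			// lease entry into Name/IPAddress/HWAddress fields as logged above.
			if strings.Contains(string(data), hwAddr) {
				return true, nil
			}
			time.Sleep(interval)
		}
		return false, nil
	}

	func main() {
		// MAC, path, and retry budget copied from the log above.
		found, err := waitForLease("/var/db/dhcpd_leases", "2a:0:cb:21:e8:e5", 30, 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, "reading leases:", err)
			os.Exit(1)
		}
		if !found {
			fmt.Println("IP address never found in dhcp leases file")
			return
		}
		fmt.Println("lease found for MAC")
	}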
	I0725 11:26:49.918886    6377 start.go:360] acquireMachinesLock for docker-flags-087000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:27:42.638324    6377 start.go:364] duration metric: took 52.718437005s to acquireMachinesLock for "docker-flags-087000"
	I0725 11:27:42.638372    6377 start.go:93] Provisioning new machine with config: &{Name:docker-flags-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-087000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:27:42.638440    6377 start.go:125] createHost starting for "" (driver="hyperkit")
	I0725 11:27:42.680442    6377 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:27:42.680518    6377 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:27:42.680548    6377 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:27:42.688989    6377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53887
	I0725 11:27:42.689352    6377 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:27:42.689678    6377 main.go:141] libmachine: Using API Version  1
	I0725 11:27:42.689688    6377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:27:42.689914    6377 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:27:42.690036    6377 main.go:141] libmachine: (docker-flags-087000) Calling .GetMachineName
	I0725 11:27:42.690134    6377 main.go:141] libmachine: (docker-flags-087000) Calling .DriverName
	I0725 11:27:42.690247    6377 start.go:159] libmachine.API.Create for "docker-flags-087000" (driver="hyperkit")
	I0725 11:27:42.690267    6377 client.go:168] LocalClient.Create starting
	I0725 11:27:42.690297    6377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem
	I0725 11:27:42.690349    6377 main.go:141] libmachine: Decoding PEM data...
	I0725 11:27:42.690362    6377 main.go:141] libmachine: Parsing certificate...
	I0725 11:27:42.690402    6377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem
	I0725 11:27:42.690443    6377 main.go:141] libmachine: Decoding PEM data...
	I0725 11:27:42.690461    6377 main.go:141] libmachine: Parsing certificate...
	I0725 11:27:42.690481    6377 main.go:141] libmachine: Running pre-create checks...
	I0725 11:27:42.690487    6377 main.go:141] libmachine: (docker-flags-087000) Calling .PreCreateCheck
	I0725 11:27:42.690575    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:42.690609    6377 main.go:141] libmachine: (docker-flags-087000) Calling .GetConfigRaw
	I0725 11:27:42.701827    6377 main.go:141] libmachine: Creating machine...
	I0725 11:27:42.701841    6377 main.go:141] libmachine: (docker-flags-087000) Calling .Create
	I0725 11:27:42.701939    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:42.702095    6377 main.go:141] libmachine: (docker-flags-087000) DBG | I0725 11:27:42.701939    6536 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:27:42.702116    6377 main.go:141] libmachine: (docker-flags-087000) Downloading /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1195/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 11:27:43.141396    6377 main.go:141] libmachine: (docker-flags-087000) DBG | I0725 11:27:43.141285    6536 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/id_rsa...
	I0725 11:27:43.221080    6377 main.go:141] libmachine: (docker-flags-087000) DBG | I0725 11:27:43.220987    6536 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/docker-flags-087000.rawdisk...
	I0725 11:27:43.221096    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Writing magic tar header
	I0725 11:27:43.221107    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Writing SSH key tar header
	I0725 11:27:43.221488    6377 main.go:141] libmachine: (docker-flags-087000) DBG | I0725 11:27:43.221397    6536 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000 ...
	I0725 11:27:43.596560    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:43.596582    6377 main.go:141] libmachine: (docker-flags-087000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/hyperkit.pid
	I0725 11:27:43.596594    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Using UUID e93e54d8-1e0f-4c5d-ba8e-045dd2097598
	I0725 11:27:43.622401    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Generated MAC 5e:42:83:6a:b4:bb
	I0725 11:27:43.622421    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-087000
	I0725 11:27:43.622468    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e93e54d8-1e0f-4c5d-ba8e-045dd2097598", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:27:43.622496    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e93e54d8-1e0f-4c5d-ba8e-045dd2097598", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:27:43.622538    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e93e54d8-1e0f-4c5d-ba8e-045dd2097598", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/docker-flags-087000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-087000"}
	I0725 11:27:43.622581    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e93e54d8-1e0f-4c5d-ba8e-045dd2097598 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/docker-flags-087000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-087000"
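	The Start/check structs and the Arguments/CmdLine lines above spell out how the driver launches the VM: two vCPUs, 2048 MB of memory, a virtio-net NIC, the raw disk on virtio-blk, the boot2docker ISO on ahci-cd, and a direct kexec boot of the bzimage/initrd pair. Below is a minimal sketch of assembling that same invocation with os/exec; it reuses the paths and UUID from the logged command and is illustrative only, not how minikube's hyperkit driver is actually implemented.

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Paths and UUID copied from the CmdLine logged above; adjust for a
		// different machine directory.
		dir := "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000"
		kernelCmdline := "earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 " +
			"systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-087000"
		args := []string{
			"-A", "-u",
			"-F", dir + "/hyperkit.pid", // pid file the driver later reads back
			"-c", "2", // vCPUs
			"-m", "2048M", // memory
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net", // NIC whose MAC is later looked up in dhcpd_leases
			"-U", "e93e54d8-1e0f-4c5d-ba8e-045dd2097598",
			"-s", "2:0,virtio-blk," + dir + "/docker-flags-087000.rawdisk",
			"-s", "3,ahci-cd," + dir + "/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty=" + dir + "/tty,log=" + dir + "/console-ring",
			"-f", "kexec," + dir + "/bzimage," + dir + "/initrd," + kernelCmdline,
		}
		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		if err := cmd.Start(); err != nil {
			log.Fatalf("starting hyperkit: %v", err)
		}
		log.Printf("hyperkit running with pid %d", cmd.Process.Pid)
	}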
	I0725 11:27:43.622601    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 11:27:43.625585    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 DEBUG: hyperkit: Pid is 6550
	I0725 11:27:43.626972    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 0
	I0725 11:27:43.626988    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:43.627081    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:27:43.628008    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:27:43.628085    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:43.628102    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:43.628117    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:43.628128    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:43.628148    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:43.628164    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:43.628178    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:43.628189    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:43.628211    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:43.628222    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:43.628250    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:43.628270    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:43.628279    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:43.628288    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:43.628301    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:43.628314    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:43.628326    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:43.628336    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:43.628350    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:43.633664    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 11:27:43.641757    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/docker-flags-087000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 11:27:43.642626    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:27:43.642649    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:27:43.642678    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:27:43.642692    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:27:44.023963    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 11:27:44.023979    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 11:27:44.139527    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:27:44.139543    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:27:44.139551    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:27:44.139557    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:27:44.140123    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 11:27:44.140136    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 11:27:45.628327    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 1
	I0725 11:27:45.628344    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:45.628442    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:27:45.629287    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:27:45.629356    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:45.629366    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:45.629388    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:45.629395    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:45.629403    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:45.629412    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:45.629419    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:45.629425    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:45.629440    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:45.629452    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:45.629460    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:45.629468    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:45.629491    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:45.629504    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:45.629527    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:45.629537    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:45.629547    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:45.629555    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:45.629571    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:47.631576    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 2
	I0725 11:27:47.631595    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:47.631667    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:27:47.632519    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:27:47.632533    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:47.632544    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:47.632551    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:47.632558    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:47.632565    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:47.632573    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:47.632583    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:47.632599    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:47.632608    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:47.632617    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:47.632625    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:47.632632    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:47.632639    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:47.632646    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:47.632654    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:47.632676    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:47.632688    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:47.632698    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:47.632709    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:49.515517    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:49 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 11:27:49.515651    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:49 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 11:27:49.515665    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:49 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 11:27:49.534902    6377 main.go:141] libmachine: (docker-flags-087000) DBG | 2024/07/25 11:27:49 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 11:27:49.634903    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 3
	I0725 11:27:49.634931    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:49.635116    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:27:49.636604    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:27:49.636725    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:49.636751    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:49.636776    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:49.636811    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:49.636821    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:49.636831    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:49.636853    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:49.636869    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:49.636881    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:49.636892    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:49.636913    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:49.636930    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:49.636947    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:49.636959    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:49.636985    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:49.636999    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:49.637021    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:49.637040    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:49.637052    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:51.636984    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 4
	I0725 11:27:51.637000    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:51.637074    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:27:51.637898    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:27:51.637944    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:51.637962    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:51.637983    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:51.637994    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:51.638012    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:51.638021    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:51.638034    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:51.638046    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:51.638054    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:51.638063    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:51.638069    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:51.638077    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:51.638084    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:51.638105    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:51.638118    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:51.638127    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:51.638136    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:51.638141    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:51.638159    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:53.639602    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 5
	I0725 11:27:53.639615    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:53.639672    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:27:53.640476    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:27:53.640536    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:53.640547    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:53.640556    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:53.640564    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:53.640574    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:53.640585    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:53.640592    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:53.640600    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:53.640606    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:53.640613    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:53.640624    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:53.640639    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:53.640659    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:53.640673    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:53.640695    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:53.640707    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:53.640716    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:53.640723    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:53.640729    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:55.640758    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 6
	I0725 11:27:55.640771    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:55.640857    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:27:55.641784    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:27:55.641823    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:55.641839    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:55.641864    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:55.641876    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:55.641884    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:55.641893    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:55.641908    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:55.641919    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:55.641928    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:55.641936    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:55.641943    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:55.641951    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:55.641958    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:55.641964    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:55.641974    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:55.641984    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:55.641991    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:55.641997    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:55.642015    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:57.642329    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 7
	I0725 11:27:57.642344    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:57.642433    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:27:57.643233    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:27:57.643276    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:57.643291    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:57.643311    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:57.643324    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:57.643332    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:57.643338    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:57.643374    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:57.643383    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:57.643391    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:57.643399    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:57.643406    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:57.643420    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:57.643429    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:57.643437    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:57.643454    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:57.643468    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:57.643477    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:57.643485    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:57.643495    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:59.644595    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 8
	I0725 11:27:59.644611    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:59.644723    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:27:59.645529    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:27:59.645575    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:59.645587    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:59.645613    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:59.645624    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:59.645633    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:59.645642    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:59.645650    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:59.645659    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:59.645668    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:59.645679    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:59.645691    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:59.645708    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:59.645717    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:59.645724    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:59.645733    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:59.645740    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:59.645749    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:59.645761    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:59.645769    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:01.646780    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 9
	I0725 11:28:01.646796    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:01.646918    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:01.647719    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:01.647758    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:01.647774    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:01.647785    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:01.647796    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:01.647802    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:01.647809    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:01.647819    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:01.647836    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:01.647851    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:01.647860    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:01.647880    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:01.647888    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:01.647901    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:01.647909    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:01.647918    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:01.647929    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:01.647938    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:01.647947    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:01.647953    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:03.648108    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 10
	I0725 11:28:03.648128    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:03.648257    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:03.649076    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:03.649129    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:03.649144    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:03.649160    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:03.649169    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:03.649181    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:03.649189    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:03.649197    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:03.649205    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:03.649211    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:03.649233    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:03.649244    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:03.649254    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:03.649261    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:03.649268    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:03.649275    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:03.649283    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:03.649289    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:03.649297    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:03.649321    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:05.649676    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 11
	I0725 11:28:05.649689    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:05.649745    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:05.650559    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:05.650607    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:05.650621    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:05.650635    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:05.650643    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:05.650652    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:05.650660    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:05.650666    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:05.650672    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:05.650683    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:05.650695    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:05.650702    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:05.650710    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:05.650719    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:05.650726    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:05.650733    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:05.650741    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:05.650757    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:05.650769    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:05.650779    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:07.652856    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 12
	I0725 11:28:07.652872    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:07.652942    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:07.653735    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:07.653786    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:07.653802    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:07.653812    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:07.653818    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:07.653832    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:07.653841    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:07.653851    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:07.653858    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:07.653866    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:07.653871    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:07.653878    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:07.653884    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:07.653891    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:07.653900    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:07.653907    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:07.653914    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:07.653922    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:07.653936    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:07.653950    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:09.654176    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 13
	I0725 11:28:09.654192    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:09.654317    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:09.655134    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:09.655177    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:09.655191    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:09.655206    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:09.655218    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:09.655227    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:09.655233    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:09.655240    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:09.655263    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:09.655275    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:09.655283    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:09.655295    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:09.655303    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:09.655311    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:09.655321    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:09.655330    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:09.655338    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:09.655347    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:09.655355    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:09.655363    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:11.657045    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 14
	I0725 11:28:11.657061    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:11.657114    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:11.657906    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:11.657958    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:11.657970    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:11.657988    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:11.658003    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:11.658011    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:11.658020    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:11.658045    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:11.658055    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:11.658062    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:11.658071    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:11.658079    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:11.658087    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:11.658101    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:11.658115    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:11.658124    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:11.658134    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:11.658149    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:11.658167    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:11.658177    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:13.660253    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 15
	I0725 11:28:13.660269    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:13.660315    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:13.661101    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:13.661143    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:13.661153    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:13.661162    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:13.661171    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:13.661180    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:13.661188    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:13.661208    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:13.661221    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:13.661229    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:13.661237    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:13.661251    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:13.661266    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:13.661276    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:13.661285    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:13.661292    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:13.661297    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:13.661316    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:13.661329    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:13.661342    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:15.662073    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 16
	I0725 11:28:15.662096    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:15.662208    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:15.663001    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:15.663037    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:15.663046    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:15.663059    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:15.663066    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:15.663080    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:15.663095    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:15.663103    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:15.663111    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:15.663129    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:15.663139    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:15.663146    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:15.663153    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:15.663160    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:15.663168    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:15.663184    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:15.663195    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:15.663203    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:15.663210    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:15.663229    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:17.663937    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 17
	I0725 11:28:17.663952    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:17.664037    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:17.665017    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:17.665065    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:17.665074    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:17.665084    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:17.665094    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:17.665104    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:17.665110    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:17.665117    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:17.665124    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:17.665140    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:17.665148    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:17.665155    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:17.665164    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:17.665170    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:17.665178    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:17.665194    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:17.665208    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:17.665222    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:17.665231    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:17.665241    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:19.667280    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 18
	I0725 11:28:19.667361    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:19.667393    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:19.668210    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:19.668267    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:19.668278    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:19.668285    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:19.668291    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:19.668310    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:19.668325    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:19.668333    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:19.668339    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:19.668347    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:19.668355    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:19.668361    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:19.668376    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:19.668382    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:19.668389    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:19.668397    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:19.668413    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:19.668429    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:19.668445    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:19.668459    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:21.669944    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 19
	I0725 11:28:21.669982    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:21.670047    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:21.670857    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:21.671060    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:21.671077    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:21.671085    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:21.671091    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:21.671097    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:21.671104    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:21.671112    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:21.671119    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:21.671126    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:21.671131    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:21.671146    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:21.671158    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:21.671166    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:21.671173    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:21.671179    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:21.671187    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:21.671195    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:21.671202    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:21.671210    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:23.673237    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 20
	I0725 11:28:23.673250    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:23.673305    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:23.674157    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:23.674215    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:23.674229    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:23.674238    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:23.674249    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:23.674258    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:23.674267    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:23.674275    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:23.674283    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:23.674300    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:23.674312    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:23.674321    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:23.674329    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:23.674339    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:23.674350    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:23.674366    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:23.674379    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:23.674387    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:23.674396    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:23.674405    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:25.674540    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 21
	I0725 11:28:25.674555    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:25.674636    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:25.675404    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:25.675466    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:25.675476    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:25.675500    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:25.675508    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:25.675515    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:25.675531    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:25.675539    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:25.675546    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:25.675554    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:25.675561    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:25.675569    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:25.675576    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:25.675586    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:25.675595    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:25.675607    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:25.675615    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:25.675630    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:25.675644    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:25.675654    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:27.676763    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 22
	I0725 11:28:27.676779    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:27.676873    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:27.677720    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:27.677763    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:27.677776    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:27.677791    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:27.677797    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:27.677804    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:27.677809    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:27.677816    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:27.677823    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:27.677829    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:27.677836    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:27.677844    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:27.677852    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:27.677862    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:27.677871    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:27.677879    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:27.677894    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:27.677906    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:27.677918    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:27.677927    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:29.680028    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 23
	I0725 11:28:29.680042    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:29.680138    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:29.680963    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:29.681009    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:29.681018    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:29.681027    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:29.681034    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:29.681050    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:29.681071    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:29.681089    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:29.681101    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:29.681109    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:29.681117    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:29.681125    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:29.681133    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:29.681148    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:29.681161    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:29.681169    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:29.681175    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:29.681190    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:29.681202    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:29.681219    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:31.681543    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 24
	I0725 11:28:31.681556    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:31.681618    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:31.682385    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:31.682439    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:31.682448    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:31.682457    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:31.682464    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:31.682473    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:31.682483    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:31.682489    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:31.682495    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:31.682503    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:31.682513    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:31.682524    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:31.682532    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:31.682538    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:31.682544    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:31.682558    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:31.682571    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:31.682579    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:31.682585    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:31.682595    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:33.684659    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 25
	I0725 11:28:33.684674    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:33.684736    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:33.685615    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:33.685674    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:33.685686    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:33.685696    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:33.685705    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:33.685714    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:33.685721    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:33.685736    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:33.685748    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:33.685756    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:33.685763    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:33.685777    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:33.685789    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:33.685798    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:33.685812    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:33.685822    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:33.685830    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:33.685837    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:33.685845    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:33.685853    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:35.686855    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 26
	I0725 11:28:35.686868    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:35.687015    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:35.687804    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:35.687850    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:35.687874    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:35.687886    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:35.687896    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:35.687901    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:35.687917    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:35.687929    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:35.687937    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:35.687947    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:35.687954    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:35.687961    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:35.687969    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:35.687975    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:35.687988    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:35.688000    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:35.688008    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:35.688021    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:35.688032    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:35.688039    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:37.689834    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 27
	I0725 11:28:37.689847    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:37.689917    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:37.690707    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:37.690763    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:37.690775    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:37.690785    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:37.690795    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:37.690803    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:37.690810    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:37.690815    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:37.690822    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:37.690830    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:37.690840    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:37.690850    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:37.690858    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:37.690866    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:37.690881    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:37.690894    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:37.690903    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:37.690911    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:37.690936    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:37.690949    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:39.691009    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 28
	I0725 11:28:39.691025    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:39.691096    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:39.691867    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:39.691916    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:39.691929    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:39.691941    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:39.691949    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:39.691956    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:39.691963    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:39.691979    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:39.691987    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:39.691994    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:39.692000    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:39.692012    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:39.692028    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:39.692037    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:39.692045    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:39.692053    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:39.692061    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:39.692067    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:39.692073    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:39.692082    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:41.692232    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Attempt 29
	I0725 11:28:41.692254    6377 main.go:141] libmachine: (docker-flags-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:28:41.692294    6377 main.go:141] libmachine: (docker-flags-087000) DBG | hyperkit pid from json: 6550
	I0725 11:28:41.693146    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Searching for 5e:42:83:6a:b4:bb in /var/db/dhcpd_leases ...
	I0725 11:28:41.693202    6377 main.go:141] libmachine: (docker-flags-087000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:28:41.693217    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:28:41.693228    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:28:41.693235    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:28:41.693250    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:28:41.693262    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:28:41.693269    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:28:41.693277    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:28:41.693285    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:28:41.693293    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:28:41.693299    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:28:41.693305    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:28:41.693340    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:28:41.693356    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:28:41.693369    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:28:41.693377    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:28:41.693384    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:28:41.693392    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:28:41.693411    6377 main.go:141] libmachine: (docker-flags-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:28:43.693605    6377 client.go:171] duration metric: took 1m1.002208786s to LocalClient.Create
	I0725 11:28:45.693789    6377 start.go:128] duration metric: took 1m3.054183117s to createHost
	I0725 11:28:45.693807    6377 start.go:83] releasing machines lock for "docker-flags-087000", held for 1m3.05430441s
	W0725 11:28:45.693891    6377 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-087000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:42:83:6a:b4:bb
	* Failed to start hyperkit VM. Running "minikube delete -p docker-flags-087000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:42:83:6a:b4:bb
	I0725 11:28:45.758539    6377 out.go:177] 
	W0725 11:28:45.779781    6377 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:42:83:6a:b4:bb
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:42:83:6a:b4:bb
	W0725 11:28:45.779798    6377 out.go:239] * 
	* 
	W0725 11:28:45.780452    6377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:28:45.842635    6377 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-087000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
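The failure above is the hyperkit driver's IP discovery timing out: after launching the VM it repeatedly scans /var/db/dhcpd_leases for the MAC address it generated (here 5e:42:83:6a:b4:bb, polled roughly every two seconds through "Attempt 29" in the log) and gives up when no matching lease appears, which is exactly the "IP address never found in dhcp leases file" error. The sketch below illustrates that kind of lookup. It is not minikube's actual driver code; the parser assumes the key=value block format macOS bootpd writes to /var/db/dhcpd_leases, and the names parseLeases and waitForIP are hypothetical.

// lease_lookup.go: illustrative sketch only, not minikube's hyperkit driver.
// Assumes /var/db/dhcpd_leases contains blocks of the form:
//
//	{
//	        name=minikube
//	        ip_address=192.169.0.19
//	        hw_address=1,96:fa:ee:ec:8e:9
//	        lease=0x66a3e882
//	}
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

type dhcpEntry struct {
	Name, IPAddress, HWAddress, Lease string
}

// parseLeases reads the leases file and returns one entry per {...} block.
func parseLeases(path string) ([]dhcpEntry, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var entries []dhcpEntry
	var cur dhcpEntry
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			cur = dhcpEntry{}
		case line == "}":
			entries = append(entries, cur)
		default:
			if k, v, ok := strings.Cut(line, "="); ok {
				switch k {
				case "name":
					cur.Name = v
				case "ip_address":
					cur.IPAddress = v
				case "hw_address":
					// Stored as "1,<mac>"; keep only the MAC part.
					// Note: bootpd drops leading zeros in octets (e.g.
					// 96:fa:ee:ec:8e:9 above), which a real implementation
					// would normalize before comparing.
					cur.HWAddress = strings.TrimPrefix(v, "1,")
				case "lease":
					cur.Lease = v
				}
			}
		}
	}
	return entries, sc.Err()
}

// waitForIP polls the leases file until the VM's MAC shows up, mirroring the
// "Attempt N ... Searching for <mac> in /var/db/dhcpd_leases" loop in the log.
func waitForIP(mac string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if entries, err := parseLeases("/var/db/dhcpd_leases"); err == nil {
			for _, e := range entries {
				if strings.EqualFold(e.HWAddress, mac) {
					return e.IPAddress, nil
				}
			}
		}
		time.Sleep(delay)
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	ip, err := waitForIP("5e:42:83:6a:b4:bb", 30, 2*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}

The poll-and-retry shape matters because the lease only appears once the guest finishes booting and sends its DHCP request; in this run it never did, so the driver exhausted its attempts and the start command exited with status 80.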
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-087000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-087000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (180.248333ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-087000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-087000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-087000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-087000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (170.976619ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-087000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-087000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-087000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
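For reference, the two systemctl queries above are what the flags check boils down to: the --docker-env values (FOO=BAR, BAZ=BAT) should surface in the docker unit's Environment property, and the --docker-opt values should surface in its ExecStart line (the test looks for --debug). A rough sketch of that check follows; it is an illustration under those assumptions, not the actual docker_test.go code, and it only reuses the commands and expected substrings already shown in the log.

// check_docker_flags.go: illustrative sketch only, not the real docker_test.go.
// Re-runs the "systemctl show docker" queries from the log and checks that the
// --docker-env / --docker-opt values appear in the output.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// show runs the same minikube ssh command the test log records.
func show(profile, property string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh",
		"sudo systemctl show docker --property="+property+" --no-pager").CombinedOutput()
	return string(out), err
}

func main() {
	profile := "docker-flags-087000"

	env, err := show(profile, "Environment")
	if err != nil {
		fmt.Fprintf(os.Stderr, "systemctl show failed: %v\n%s", err, env)
		os.Exit(1)
	}
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(env, kv) {
			fmt.Fprintf(os.Stderr, "expected %q in docker Environment, got: %q\n", kv, env)
		}
	}

	execStart, err := show(profile, "ExecStart")
	if err != nil {
		fmt.Fprintf(os.Stderr, "systemctl show failed: %v\n%s", err, execStart)
		os.Exit(1)
	}
	if !strings.Contains(execStart, "--debug") {
		fmt.Fprintf(os.Stderr, "expected --debug in docker ExecStart, got: %q\n", execStart)
	}
}

In this run both queries exit with status 50 (DRV_CP_ENDPOINT) before any property can be inspected, because the cluster never got an IP in the first place.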
panic.go:626: *** TestDockerFlags FAILED at 2024-07-25 11:28:46.306998 -0700 PDT m=+3643.354100294
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-087000 -n docker-flags-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-087000 -n docker-flags-087000: exit status 7 (79.468646ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 11:28:46.384410    6609 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0725 11:28:46.384437    6609 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-087000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-087000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-087000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-087000: (5.238980894s)
--- FAIL: TestDockerFlags (252.01s)

                                                
                                    
TestForceSystemdFlag (251.96s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-521000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-521000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.383974588s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-521000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-521000" primary control-plane node in "force-systemd-flag-521000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-521000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 11:23:36.508361    6316 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:23:36.508649    6316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:23:36.508654    6316 out.go:304] Setting ErrFile to fd 2...
	I0725 11:23:36.508658    6316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:23:36.508837    6316 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 11:23:36.510276    6316 out.go:298] Setting JSON to false
	I0725 11:23:36.533130    6316 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4986,"bootTime":1721926830,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 11:23:36.533224    6316 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:23:36.553504    6316 out.go:177] * [force-systemd-flag-521000] minikube v1.33.1 on Darwin 14.5
	I0725 11:23:36.596642    6316 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:23:36.596643    6316 notify.go:220] Checking for updates...
	I0725 11:23:36.639356    6316 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 11:23:36.660470    6316 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 11:23:36.681527    6316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:23:36.702348    6316 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:23:36.723504    6316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:23:36.745080    6316 config.go:182] Loaded profile config "force-systemd-env-531000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:23:36.745188    6316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:23:36.774392    6316 out.go:177] * Using the hyperkit driver based on user configuration
	I0725 11:23:36.816528    6316 start.go:297] selected driver: hyperkit
	I0725 11:23:36.816545    6316 start.go:901] validating driver "hyperkit" against <nil>
	I0725 11:23:36.816560    6316 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:23:36.819481    6316 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:23:36.819594    6316 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 11:23:36.827893    6316 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 11:23:36.832575    6316 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:23:36.832597    6316 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 11:23:36.832628    6316 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:23:36.832831    6316 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 11:23:36.832856    6316 cni.go:84] Creating CNI manager for ""
	I0725 11:23:36.832871    6316 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:23:36.832878    6316 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:23:36.832936    6316 start.go:340] cluster config:
	{Name:force-systemd-flag-521000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:23:36.833026    6316 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:23:36.854330    6316 out.go:177] * Starting "force-systemd-flag-521000" primary control-plane node in "force-systemd-flag-521000" cluster
	I0725 11:23:36.896505    6316 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:23:36.896551    6316 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0725 11:23:36.896566    6316 cache.go:56] Caching tarball of preloaded images
	I0725 11:23:36.896690    6316 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 11:23:36.896700    6316 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:23:36.896777    6316 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/force-systemd-flag-521000/config.json ...
	I0725 11:23:36.896799    6316 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/force-systemd-flag-521000/config.json: {Name:mk053ca52de790d9f032f3dcb960903dc75df06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:23:36.897160    6316 start.go:360] acquireMachinesLock for force-systemd-flag-521000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:24:33.863656    6316 start.go:364] duration metric: took 56.965425294s to acquireMachinesLock for "force-systemd-flag-521000"
	I0725 11:24:33.863720    6316 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:24:33.863797    6316 start.go:125] createHost starting for "" (driver="hyperkit")
	I0725 11:24:33.885170    6316 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:24:33.885344    6316 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:24:33.885379    6316 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:24:33.893997    6316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53859
	I0725 11:24:33.894475    6316 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:24:33.895021    6316 main.go:141] libmachine: Using API Version  1
	I0725 11:24:33.895030    6316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:24:33.895255    6316 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:24:33.895387    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .GetMachineName
	I0725 11:24:33.895483    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .DriverName
	I0725 11:24:33.895602    6316 start.go:159] libmachine.API.Create for "force-systemd-flag-521000" (driver="hyperkit")
	I0725 11:24:33.895662    6316 client.go:168] LocalClient.Create starting
	I0725 11:24:33.895696    6316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem
	I0725 11:24:33.895748    6316 main.go:141] libmachine: Decoding PEM data...
	I0725 11:24:33.895766    6316 main.go:141] libmachine: Parsing certificate...
	I0725 11:24:33.895824    6316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem
	I0725 11:24:33.895863    6316 main.go:141] libmachine: Decoding PEM data...
	I0725 11:24:33.895871    6316 main.go:141] libmachine: Parsing certificate...
	I0725 11:24:33.895884    6316 main.go:141] libmachine: Running pre-create checks...
	I0725 11:24:33.895892    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .PreCreateCheck
	I0725 11:24:33.895982    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:33.896187    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .GetConfigRaw
	I0725 11:24:33.906155    6316 main.go:141] libmachine: Creating machine...
	I0725 11:24:33.906164    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .Create
	I0725 11:24:33.906247    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:33.906403    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | I0725 11:24:33.906240    6362 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:24:33.906450    6316 main.go:141] libmachine: (force-systemd-flag-521000) Downloading /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1195/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 11:24:34.328518    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | I0725 11:24:34.328448    6362 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/id_rsa...
	I0725 11:24:34.440228    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | I0725 11:24:34.440169    6362 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/force-systemd-flag-521000.rawdisk...
	I0725 11:24:34.440245    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Writing magic tar header
	I0725 11:24:34.440256    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Writing SSH key tar header
	I0725 11:24:34.440539    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | I0725 11:24:34.440504    6362 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000 ...
	I0725 11:24:34.861185    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:34.861209    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/hyperkit.pid
	I0725 11:24:34.861221    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Using UUID 98b96ece-da87-4ea0-8c83-3d5cc6aabc70
	I0725 11:24:34.886806    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Generated MAC 9e:e4:94:77:7a:4c
	I0725 11:24:34.886825    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-521000
	I0725 11:24:34.886862    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"98b96ece-da87-4ea0-8c83-3d5cc6aabc70", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001141b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:24:34.886890    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"98b96ece-da87-4ea0-8c83-3d5cc6aabc70", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001141b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:24:34.886950    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "98b96ece-da87-4ea0-8c83-3d5cc6aabc70", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/force-systemd-flag-521000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/fo
rce-systemd-flag-521000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-521000"}
	I0725 11:24:34.886993    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 98b96ece-da87-4ea0-8c83-3d5cc6aabc70 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/force-systemd-flag-521000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/bzimage,/Users/jenkins/minikube-integr
ation/19326-1195/.minikube/machines/force-systemd-flag-521000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-521000"
	I0725 11:24:34.887027    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 11:24:34.889871    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 DEBUG: hyperkit: Pid is 6376
	I0725 11:24:34.890882    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 0
	I0725 11:24:34.890904    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:34.890990    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:34.891895    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:34.891965    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:34.891982    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:34.892000    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:34.892018    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:34.892047    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:34.892070    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:34.892086    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:34.892111    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:34.892128    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:34.892184    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:34.892210    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:34.892225    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:34.892267    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:34.892285    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:34.892294    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:34.892302    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:34.892310    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:34.892317    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:34.892325    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:34.897759    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 11:24:34.906297    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 11:24:34.907171    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:24:34.907192    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:24:34.907204    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:24:34.907214    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:24:35.284410    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 11:24:35.284426    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 11:24:35.399037    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:24:35.399064    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:24:35.399090    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:24:35.399115    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:24:35.399921    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 11:24:35.399931    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 11:24:36.893572    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 1
	I0725 11:24:36.893591    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:36.893631    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:36.894453    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:36.894512    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:36.894520    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:36.894529    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:36.894535    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:36.894543    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:36.894549    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:36.894561    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:36.894572    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:36.894588    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:36.894598    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:36.894625    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:36.894638    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:36.894646    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:36.894654    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:36.894670    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:36.894679    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:36.894687    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:36.894693    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:36.894702    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:38.895658    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 2
	I0725 11:24:38.895687    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:38.895818    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:38.896618    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:38.896662    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:38.896678    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:38.896689    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:38.896712    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:38.896727    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:38.896734    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:38.896751    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:38.896761    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:38.896772    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:38.896779    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:38.896786    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:38.896792    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:38.896808    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:38.896816    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:38.896823    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:38.896831    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:38.896846    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:38.896861    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:38.896871    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:40.787217    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0725 11:24:40.787400    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0725 11:24:40.787414    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0725 11:24:40.807340    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:24:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0725 11:24:40.899063    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 3
	I0725 11:24:40.899087    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:40.899302    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:40.900746    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:40.900881    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:40.900906    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:40.900948    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:40.900967    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:40.900986    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:40.901012    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:40.901021    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:40.901029    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:40.901048    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:40.901065    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:40.901089    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:40.901100    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:40.901118    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:40.901132    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:40.901141    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:40.901154    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:40.901178    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:40.901194    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:40.901208    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:42.901182    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 4
	I0725 11:24:42.901200    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:42.901265    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:42.902049    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:42.902104    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:42.902112    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:42.902120    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:42.902126    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:42.902132    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:42.902143    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:42.902150    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:42.902157    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:42.902164    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:42.902172    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:42.902188    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:42.902199    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:42.902207    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:42.902215    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:42.902223    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:42.902232    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:42.902240    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:42.902248    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:42.902257    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:44.902351    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 5
	I0725 11:24:44.902364    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:44.902413    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:44.903226    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:44.903254    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:44.903273    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:44.903284    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:44.903294    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:44.903300    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:44.903307    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:44.903313    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:44.903334    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:44.903350    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:44.903359    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:44.903365    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:44.903373    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:44.903381    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:44.903402    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:44.903410    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:44.903436    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:44.903448    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:44.903454    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:44.903470    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:46.903987    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 6
	I0725 11:24:46.904001    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:46.904062    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:46.904834    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:46.904886    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:46.904902    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:46.904920    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:46.904940    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:46.904954    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:46.904965    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:46.904972    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:46.904981    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:46.904997    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:46.905006    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:46.905017    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:46.905026    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:46.905037    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:46.905045    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:46.905052    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:46.905065    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:46.905072    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:46.905080    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:46.905089    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:48.905229    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 7
	I0725 11:24:48.905245    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:48.905288    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:48.906106    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:48.906213    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:48.906224    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:48.906234    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:48.906240    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:48.906250    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:48.906259    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:48.906279    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:48.906292    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:48.906301    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:48.906310    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:48.906317    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:48.906323    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:48.906335    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:48.906350    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:48.906362    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:48.906370    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:48.906379    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:48.906386    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:48.906404    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:50.907065    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 8
	I0725 11:24:50.907079    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:50.907180    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:50.907996    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:50.908050    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:50.908060    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:50.908068    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:50.908074    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:50.908083    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:50.908089    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:50.908096    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:50.908102    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:50.908112    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:50.908121    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:50.908128    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:50.908140    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:50.908150    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:50.908157    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:50.908175    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:50.908192    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:50.908204    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:50.908213    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:50.908221    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:52.908275    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 9
	I0725 11:24:52.908290    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:52.908386    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:52.909147    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:52.909201    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:52.909210    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:52.909222    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:52.909231    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:52.909238    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:52.909245    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:52.909252    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:52.909257    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:52.909270    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:52.909285    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:52.909294    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:52.909301    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:52.909307    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:52.909316    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:52.909324    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:52.909333    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:52.909340    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:52.909347    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:52.909356    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:54.911056    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 10
	I0725 11:24:54.911089    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:54.911145    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:54.911914    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:54.911962    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:54.911977    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:54.911989    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:54.911996    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:54.912016    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:54.912026    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:54.912033    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:54.912041    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:54.912050    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:54.912063    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:54.912076    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:54.912089    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:54.912097    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:54.912103    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:54.912110    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:54.912119    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:54.912127    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:54.912135    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:54.912155    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:56.912529    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 11
	I0725 11:24:56.912544    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:56.912609    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:56.913393    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:56.913435    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:56.913445    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:56.913454    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:56.913461    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:56.913469    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:56.913479    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:56.913487    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:56.913501    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:56.913508    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:56.913524    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:56.913532    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:56.913542    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:56.913552    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:56.913561    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:56.913568    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:56.913574    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:56.913584    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:56.913592    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:56.913600    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:58.915662    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 12
	I0725 11:24:58.915677    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:58.915802    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:24:58.916571    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:24:58.916620    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:58.916631    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:58.916640    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:58.916648    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:58.916658    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:58.916667    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:58.916674    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:58.916684    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:58.916697    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:58.916707    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:58.916715    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:58.916723    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:58.916731    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:58.916738    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:58.916746    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:58.916760    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:58.916774    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:58.916783    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:58.916791    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:00.917452    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 13
	I0725 11:25:00.917466    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:00.917539    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:00.918357    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:00.918365    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:00.918376    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:00.918385    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:00.918393    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:00.918406    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:00.918419    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:00.918426    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:00.918434    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:00.918441    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:00.918448    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:00.918456    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:00.918464    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:00.918471    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:00.918478    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:00.918483    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:00.918496    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:00.918513    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:00.918521    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:00.918530    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:02.920602    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 14
	I0725 11:25:02.920636    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:02.920716    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:02.921651    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:02.921696    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:02.921707    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:02.921714    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:02.921724    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:02.921739    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:02.921749    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:02.921773    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:02.921787    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:02.921796    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:02.921803    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:02.921826    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:02.921839    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:02.921855    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:02.921866    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:02.921878    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:02.921888    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:02.921900    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:02.921913    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:02.921924    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:04.922232    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 15
	I0725 11:25:04.922246    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:04.922311    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:04.923081    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:04.923116    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:04.923138    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:04.923156    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:04.923166    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:04.923188    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:04.923208    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:04.923217    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:04.923231    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:04.923240    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:04.923254    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:04.923266    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:04.923279    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:04.923293    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:04.923303    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:04.923311    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:04.923318    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:04.923325    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:04.923340    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:04.923352    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:06.924697    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 16
	I0725 11:25:06.924712    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:06.924797    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:06.925569    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:06.925631    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:06.925641    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:06.925648    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:06.925655    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:06.925671    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:06.925685    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:06.925694    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:06.925703    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:06.925721    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:06.925727    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:06.925734    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:06.925743    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:06.925758    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:06.925774    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:06.925782    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:06.925789    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:06.925797    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:06.925806    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:06.925816    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:08.926981    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 17
	I0725 11:25:08.926994    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:08.927149    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:08.928101    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:08.928152    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:08.928165    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:08.928176    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:08.928183    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:08.928191    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:08.928196    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:08.928202    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:08.928209    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:08.928217    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:08.928225    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:08.928232    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:08.928239    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:08.928248    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:08.928256    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:08.928266    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:08.928274    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:08.928283    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:08.928299    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:08.928311    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:10.930360    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 18
	I0725 11:25:10.930376    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:10.930431    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:10.931242    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:10.931304    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:10.931316    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:10.931327    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:10.931334    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:10.931349    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:10.931360    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:10.931368    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:10.931376    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:10.931386    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:10.931396    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:10.931415    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:10.931426    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:10.931446    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:10.931453    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:10.931459    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:10.931469    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:10.931475    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:10.931484    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:10.931500    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:12.932443    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 19
	I0725 11:25:12.932460    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:12.932489    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:12.933281    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:12.933298    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:12.933310    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:12.933317    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:12.933324    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:12.933333    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:12.933339    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:12.933346    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:12.933368    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:12.933380    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:12.933388    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:12.933395    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:12.933408    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:12.933420    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:12.933433    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:12.933442    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:12.933457    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:12.933466    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:12.933481    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:12.933494    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:14.934700    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 20
	I0725 11:25:14.934715    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:14.934810    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:14.935859    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:14.935909    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:14.935926    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:14.935948    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:14.935958    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:14.935969    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:14.935976    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:14.935982    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:14.935995    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:14.936010    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:14.936023    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:14.936032    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:14.936042    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:14.936050    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:14.936058    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:14.936065    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:14.936073    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:14.936089    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:14.936102    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:14.936112    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:16.938108    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 21
	I0725 11:25:16.938125    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:16.938194    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:16.938964    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:16.939025    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:16.939033    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:16.939050    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:16.939057    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:16.939063    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:16.939069    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:16.939085    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:16.939110    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:16.939121    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:16.939129    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:16.939137    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:16.939145    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:16.939153    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:16.939171    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:16.939178    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:16.939185    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:16.939190    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:16.939197    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:16.939204    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:18.941211    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 22
	I0725 11:25:18.941224    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:18.941282    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:18.942208    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:18.942262    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:18.942276    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:18.942284    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:18.942290    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:18.942307    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:18.942319    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:18.942327    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:18.942335    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:18.942342    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:18.942350    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:18.942357    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:18.942362    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:18.942374    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:18.942380    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:18.942387    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:18.942395    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:18.942408    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:18.942418    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:18.942435    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:20.944474    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 23
	I0725 11:25:20.944489    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:20.944633    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:20.945462    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:20.945521    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:20.945529    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:20.945539    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:20.945545    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:20.945571    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:20.945584    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:20.945592    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:20.945600    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:20.945607    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:20.945615    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:20.945636    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:20.945662    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:20.945672    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:20.945680    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:20.945687    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:20.945693    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:20.945708    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:20.945721    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:20.945740    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:22.946961    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 24
	I0725 11:25:22.946978    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:22.947045    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:22.947826    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:22.947876    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:22.947891    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:22.947908    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:22.947932    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:22.947941    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:22.947949    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:22.947966    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:22.947978    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:22.947986    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:22.947993    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:22.947998    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:22.948006    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:22.948013    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:22.948019    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:22.948025    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:22.948031    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:22.948038    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:22.948060    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:22.948069    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:24.950152    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 25
	I0725 11:25:24.950166    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:24.950217    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:24.951008    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:24.951063    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:24.951084    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:24.951092    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:24.951099    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:24.951117    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:24.951125    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:24.951132    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:24.951149    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:24.951157    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:24.951164    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:24.951171    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:24.951179    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:24.951195    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:24.951204    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:24.951213    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:24.951222    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:24.951237    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:24.951251    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:24.951261    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:26.953302    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 26
	I0725 11:25:26.953317    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:26.953370    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:26.954212    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:26.954226    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:26.954236    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:26.954243    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:26.954251    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:26.954258    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:26.954274    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:26.954293    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:26.954302    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:26.954311    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:26.954328    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:26.954341    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:26.954350    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:26.954359    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:26.954366    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:26.954374    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:26.954397    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:26.954410    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:26.954424    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:26.954446    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:28.956485    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 27
	I0725 11:25:28.956501    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:28.956535    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:28.957353    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:28.957374    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:28.957390    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:28.957401    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:28.957425    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:28.957435    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:28.957446    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:28.957455    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:28.957465    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:28.957482    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:28.957490    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:28.957497    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:28.957506    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:28.957512    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:28.957518    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:28.957527    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:28.957534    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:28.957541    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:28.957549    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:28.957557    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:30.959628    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 28
	I0725 11:25:30.959642    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:30.959694    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:30.960487    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:30.960539    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:30.960552    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:30.960566    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:30.960576    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:30.960585    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:30.960593    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:30.960610    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:30.960623    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:30.960631    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:30.960639    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:30.960647    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:30.960655    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:30.960675    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:30.960687    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:30.960697    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:30.960710    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:30.960718    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:30.960726    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:30.960745    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:32.961929    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 29
	I0725 11:25:32.961941    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:32.961982    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:32.962773    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 9e:e4:94:77:7a:4c in /var/db/dhcpd_leases ...
	I0725 11:25:32.962810    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:25:32.962819    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:25:32.962827    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:25:32.962834    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:25:32.962843    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:25:32.962850    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:25:32.962862    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:25:32.962869    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:25:32.962876    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:25:32.962882    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:25:32.962892    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:25:32.962899    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:25:32.962908    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:25:32.962933    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:25:32.962946    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:25:32.962956    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:25:32.962974    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:25:32.962986    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:25:32.962993    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:25:34.963555    6316 client.go:171] duration metric: took 1m1.06676152s to LocalClient.Create
	I0725 11:25:36.963721    6316 start.go:128] duration metric: took 1m3.098748387s to createHost
	I0725 11:25:36.963755    6316 start.go:83] releasing machines lock for "force-systemd-flag-521000", held for 1m3.098923877s
	W0725 11:25:36.963789    6316 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9e:e4:94:77:7a:4c
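The `IP address never found in dhcp leases file` failure above is the end state of the `Attempt N` loop: the driver keeps re-reading /var/db/dhcpd_leases, compares each entry's HWAddress against the MAC it generated for the VM (9e:e4:94:77:7a:4c here), and gives up after a fixed number of passes. The snippet below is a minimal, self-contained sketch of that polling pattern, not the driver's actual code; the lease struct, the roughly two-second spacing, and the attempt cap are assumptions read off the timestamps and attempt counters in this log.

```go
// Hypothetical sketch of the "Attempt N / Searching for <MAC> in
// /var/db/dhcpd_leases" loop seen above. Lease entries are modeled after the
// parsed form printed in the log; the real driver parses the lease file itself.
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

type dhcpLease struct {
	Name      string
	IPAddress string
	HWAddress string
}

// findIPByMAC polls the supplied lease reader until an entry with the target
// MAC appears or maxAttempts is exhausted.
func findIPByMAC(readLeases func() ([]dhcpLease, error), mac string, maxAttempts int, delay time.Duration) (string, error) {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		leases, err := readLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if strings.EqualFold(l.HWAddress, mac) {
				return l.IPAddress, nil
			}
		}
		time.Sleep(delay) // the log shows roughly 2 s between attempts
	}
	return "", errors.New("could not find an IP address for " + mac)
}

func main() {
	readStatic := func() ([]dhcpLease, error) {
		// One entry copied from the lease dump above; the target MAC never appears.
		return []dhcpLease{{Name: "minikube", IPAddress: "192.169.0.19", HWAddress: "96:fa:ee:ec:8e:9"}}, nil
	}
	_, err := findIPByMAC(readStatic, "9e:e4:94:77:7a:4c", 3, 10*time.Millisecond)
	fmt.Println(err) // same failure class as the "Temporary error" above
}
```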
	I0725 11:25:36.964130    6316 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:25:36.964149    6316 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:25:36.973033    6316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53875
	I0725 11:25:36.973591    6316 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:25:36.974089    6316 main.go:141] libmachine: Using API Version  1
	I0725 11:25:36.974133    6316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:25:36.974466    6316 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:25:36.974831    6316 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:25:36.974867    6316 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:25:36.983454    6316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53877
	I0725 11:25:36.983964    6316 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:25:36.984431    6316 main.go:141] libmachine: Using API Version  1
	I0725 11:25:36.984452    6316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:25:36.984726    6316 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:25:36.984865    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .GetState
	I0725 11:25:36.984952    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:36.985031    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:36.986053    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .DriverName
	I0725 11:25:37.007082    6316 out.go:177] * Deleting "force-systemd-flag-521000" in hyperkit ...
	I0725 11:25:37.049157    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .Remove
	I0725 11:25:37.049317    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:37.049331    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:37.049400    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:37.050341    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:37.050397    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | waiting for graceful shutdown
	I0725 11:25:38.051093    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:38.051231    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:38.052124    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | waiting for graceful shutdown
	I0725 11:25:39.052945    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:39.053014    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:39.054739    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | waiting for graceful shutdown
	I0725 11:25:40.055399    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:40.055483    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:40.056251    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | waiting for graceful shutdown
	I0725 11:25:41.057551    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:41.057639    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:41.058184    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | waiting for graceful shutdown
	I0725 11:25:42.060312    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:25:42.060422    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6376
	I0725 11:25:42.061472    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | sending sigkill
	I0725 11:25:42.061492    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
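The deletion path above polls for a graceful shutdown roughly once per second and, when the hyperkit process is still alive after a handful of passes, falls back to `sending sigkill`. A minimal standard-library sketch of that stop pattern follows; `stopProcess` and its parameters are hypothetical names, and the one-second poll interval is simply what the timestamps above show.

```go
// Hypothetical sketch of "waiting for graceful shutdown" followed by
// "sending sigkill", as in the deletion sequence above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
	"time"
)

// stopProcess asks the process to exit with SIGTERM, polls once per second for
// it to disappear, and sends SIGKILL if it is still running after maxWait.
func stopProcess(pid int, maxWait time.Duration) error {
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return err
	}
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		// Signal 0 only tests whether the pid still exists.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil // gone: graceful shutdown worked
		}
		time.Sleep(time.Second) // "waiting for graceful shutdown"
	}
	return proc.Kill() // "sending sigkill"
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	go cmd.Wait() // reap the child so the liveness check sees it exit
	fmt.Println("stop:", stopProcess(cmd.Process.Pid, 3*time.Second))
}
```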
	W0725 11:25:42.071358    6316 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9e:e4:94:77:7a:4c
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9e:e4:94:77:7a:4c
	I0725 11:25:42.071378    6316 start.go:729] Will try again in 5 seconds ...
	I0725 11:25:42.103104    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:25:42 WARN : hyperkit: failed to read stdout: EOF
	I0725 11:25:42.103125    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:25:42 WARN : hyperkit: failed to read stderr: EOF
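After the failed host is torn down, start.go logs `Will try again in 5 seconds` and repeats the whole host creation exactly once. A generic sketch of that retry-once-after-delay shape is shown here under the same assumption (one cleanup, one fixed delay, one more attempt); the helper name is made up for illustration.

```go
// Hypothetical sketch of the "StartHost failed, but will try again" flow:
// cleanup, wait, then a second and final create attempt.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHostWithRetry(create func() error, cleanup func(), delay time.Duration) error {
	if err := create(); err == nil {
		return nil
	}
	cleanup()         // "Deleting ... in hyperkit"
	time.Sleep(delay) // "Will try again in 5 seconds"
	return create()   // second and final attempt
}

func main() {
	attempts := 0
	create := func() error {
		attempts++
		if attempts == 1 {
			return errors.New("IP address never found in dhcp leases file")
		}
		return nil
	}
	cleanup := func() { fmt.Println("deleting half-created VM") }
	err := startHostWithRetry(create, cleanup, 10*time.Millisecond)
	fmt.Println("attempts:", attempts, "err:", err)
}
```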
	I0725 11:25:47.073367    6316 start.go:360] acquireMachinesLock for force-systemd-flag-521000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:26:39.775382    6316 start.go:364] duration metric: took 52.701008354s to acquireMachinesLock for "force-systemd-flag-521000"
	I0725 11:26:39.775413    6316 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
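The provisioning config above is one long flattened struct; only a few of its fields actually matter for this hyperkit failure (Memory, CPUs, DiskSize, Driver, MinikubeISO). The trimmed-down struct below is purely illustrative, with values copied from the logged config; it is not minikube's real config type.

```go
// machineConfig is a hypothetical, heavily trimmed stand-in for the config
// logged above; field names and values are copied from that log line.
package main

import "fmt"

type machineConfig struct {
	Name        string
	Memory      int // MB
	CPUs        int
	DiskSize    int // MB
	Driver      string
	MinikubeISO string
}

func main() {
	cfg := machineConfig{
		Name:        "force-systemd-flag-521000",
		Memory:      2048,
		CPUs:        2,
		DiskSize:    20000,
		Driver:      "hyperkit",
		MinikubeISO: "https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso",
	}
	fmt.Printf("%+v\n", cfg)
}
```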
	I0725 11:26:39.775465    6316 start.go:125] createHost starting for "" (driver="hyperkit")
	I0725 11:26:39.797141    6316 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:26:39.797216    6316 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:26:39.797236    6316 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:26:39.805709    6316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53885
	I0725 11:26:39.806035    6316 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:26:39.806408    6316 main.go:141] libmachine: Using API Version  1
	I0725 11:26:39.806429    6316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:26:39.806628    6316 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:26:39.806725    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .GetMachineName
	I0725 11:26:39.806815    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .DriverName
	I0725 11:26:39.806909    6316 start.go:159] libmachine.API.Create for "force-systemd-flag-521000" (driver="hyperkit")
	I0725 11:26:39.806921    6316 client.go:168] LocalClient.Create starting
	I0725 11:26:39.806948    6316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem
	I0725 11:26:39.807002    6316 main.go:141] libmachine: Decoding PEM data...
	I0725 11:26:39.807016    6316 main.go:141] libmachine: Parsing certificate...
	I0725 11:26:39.807057    6316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem
	I0725 11:26:39.807096    6316 main.go:141] libmachine: Decoding PEM data...
	I0725 11:26:39.807103    6316 main.go:141] libmachine: Parsing certificate...
	I0725 11:26:39.807115    6316 main.go:141] libmachine: Running pre-create checks...
	I0725 11:26:39.807121    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .PreCreateCheck
	I0725 11:26:39.807200    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:39.807215    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .GetConfigRaw
	I0725 11:26:39.859849    6316 main.go:141] libmachine: Creating machine...
	I0725 11:26:39.859859    6316 main.go:141] libmachine: (force-systemd-flag-521000) Calling .Create
	I0725 11:26:39.859964    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:39.860121    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | I0725 11:26:39.859959    6490 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:26:39.860153    6316 main.go:141] libmachine: (force-systemd-flag-521000) Downloading /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1195/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 11:26:40.063664    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | I0725 11:26:40.063564    6490 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/id_rsa...
	I0725 11:26:40.161255    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | I0725 11:26:40.161185    6490 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/force-systemd-flag-521000.rawdisk...
	I0725 11:26:40.161265    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Writing magic tar header
	I0725 11:26:40.161273    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Writing SSH key tar header
	I0725 11:26:40.161887    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | I0725 11:26:40.161851    6490 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000 ...
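The common.go lines above create the machine's SSH key (`.../force-systemd-flag-521000/id_rsa`) and raw disk image before hyperkit is launched. The sketch below shows one way to generate and write a PEM-encoded RSA private key like that id_rsa using only the standard library; it is an assumption-laden stand-in, and the real helper also emits the public key and the tar-wrapped raw disk, which are omitted here.

```go
// Hypothetical sketch of creating an SSH private key file like the id_rsa
// written under the machine directory above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func writePrivateKey(path string, bits int) error {
	key, err := rsa.GenerateKey(rand.Reader, bits)
	if err != nil {
		return err
	}
	block := &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	}
	// 0600 matches the usual permissions for a private key file.
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer f.Close()
	return pem.Encode(f, block)
}

func main() {
	if err := writePrivateKey("id_rsa", 2048); err != nil {
		fmt.Println("error:", err)
	}
}
```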
	I0725 11:26:40.533877    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:40.533897    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/hyperkit.pid
	I0725 11:26:40.533917    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Using UUID 7be17994-b7cf-4e19-b1ee-d0834eda0821
	I0725 11:26:40.559624    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Generated MAC 52:3b:65:72:1f:2c
	I0725 11:26:40.559639    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-521000
	I0725 11:26:40.559681    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7be17994-b7cf-4e19-b1ee-d0834eda0821", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:26:40.559726    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7be17994-b7cf-4e19-b1ee-d0834eda0821", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:26:40.559782    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7be17994-b7cf-4e19-b1ee-d0834eda0821", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/force-systemd-flag-521000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-521000"}
	I0725 11:26:40.559831    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7be17994-b7cf-4e19-b1ee-d0834eda0821 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/force-systemd-flag-521000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-521000"
	I0725 11:26:40.559847    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 11:26:40.562878    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 DEBUG: hyperkit: Pid is 6491
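The DEBUG lines above record the exact hyperkit argv the driver builds, then the pid it tracks (`Pid is 6491`). The sketch below launches an external hyperkit binary in the same general way with os/exec, using only a few representative flags lifted from the logged command line; the wrapper function is hypothetical and deliberately incomplete.

```go
// Hypothetical sketch of launching hyperkit with a subset of the flags from
// the CmdLine logged above and capturing its pid.
package main

import (
	"fmt"
	"os/exec"
)

func launchHyperkit(stateDir, uuid string) (int, error) {
	args := []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid", // pid file, as in the log
		"-c", "2", // CPUs
		"-m", "2048M", // memory
		"-U", uuid,
		// The real invocation above also wires up virtio-net, virtio-blk,
		// the ISO, the serial console, and the kexec boot line.
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	if err := cmd.Start(); err != nil {
		return 0, err
	}
	return cmd.Process.Pid, nil
}

func main() {
	pid, err := launchHyperkit("/tmp/hyperkit-demo", "7be17994-b7cf-4e19-b1ee-d0834eda0821")
	if err != nil {
		fmt.Println("start failed (hyperkit is likely not installed here):", err)
		return
	}
	fmt.Println("hyperkit pid:", pid)
}
```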
	I0725 11:26:40.563336    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 0
	I0725 11:26:40.563353    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:40.563440    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:26:40.564408    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:26:40.564483    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:40.564500    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:40.564506    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:40.564514    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:40.564520    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:40.564528    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:40.564537    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:40.564548    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:40.564560    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:40.564593    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:40.564616    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:40.564631    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:40.564656    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:40.564667    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:40.564678    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:40.564690    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:40.564702    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:40.564714    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:40.564727    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:40.570501    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 11:26:40.578705    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-flag-521000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 11:26:40.579684    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:26:40.579714    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:26:40.579734    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:26:40.579753    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:26:40.957454    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 11:26:40.957474    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 11:26:41.072077    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:26:41.072117    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:26:41.072133    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:26:41.072146    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:26:41.072973    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 11:26:41.072986    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 11:26:42.566670    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 1
	I0725 11:26:42.566687    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:42.566804    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:26:42.567596    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:26:42.567645    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:42.567655    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:42.567681    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:42.567697    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:42.567715    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:42.567723    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:42.567741    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:42.567753    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:42.567761    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:42.567769    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:42.567777    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:42.567785    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:42.567792    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:42.567811    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:42.567821    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:42.567829    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:42.567844    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:42.567853    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:42.567862    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:44.569613    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 2
	I0725 11:26:44.569639    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:44.569705    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:26:44.570516    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:26:44.570544    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:44.570551    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:44.570574    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:44.570585    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:44.570592    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:44.570600    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:44.570624    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:44.570638    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:44.570648    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:44.570656    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:44.570670    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:44.570685    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:44.570693    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:44.570702    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:44.570714    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:44.570725    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:44.570732    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:44.570740    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:44.570749    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:46.480406    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0725 11:26:46.480522    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0725 11:26:46.480533    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0725 11:26:46.500504    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | 2024/07/25 11:26:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0725 11:26:46.572314    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 3
	I0725 11:26:46.572341    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:46.572506    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:26:46.573949    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:26:46.574058    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:46.574089    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:46.574132    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:46.574159    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:46.574177    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:46.574188    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:46.574202    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:46.574228    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:46.574245    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:46.574284    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:46.574303    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:46.574317    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:46.574347    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:46.574365    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:46.574375    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:46.574387    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:46.574396    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:46.574407    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:46.574419    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:48.574451    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 4
	I0725 11:26:48.574470    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:48.574572    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:26:48.575383    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:26:48.575406    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:48.575417    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:48.575432    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:48.575444    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:48.575453    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:48.575460    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:48.575467    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:48.575475    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:48.575489    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:48.575496    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:48.575504    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:48.575512    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:48.575519    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:48.575526    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:48.575537    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:48.575545    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:48.575555    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:48.575564    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:48.575573    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:50.576210    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 5
	I0725 11:26:50.576225    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:50.576369    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:26:50.577178    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:26:50.577230    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:50.577240    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:50.577249    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:50.577256    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:50.577284    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:50.577298    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:50.577308    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:50.577317    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:50.577327    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:50.577337    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:50.577346    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:50.577354    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:50.577362    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:50.577369    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:50.577377    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:50.577384    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:50.577391    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:50.577399    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:50.577411    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:52.578231    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 6
	I0725 11:26:52.578243    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:52.578333    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:26:52.579088    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:26:52.579148    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:52.579160    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:52.579170    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:52.579181    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:52.579191    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:52.579200    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:52.579226    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:52.579242    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:52.579258    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:52.579266    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:52.579275    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:52.579282    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:52.579293    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:52.579305    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:52.579314    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:52.579323    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:52.579331    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:52.579343    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:52.579355    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:54.581392    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 7
	I0725 11:26:54.581408    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:54.581456    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:26:54.582322    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:26:54.582364    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:54.582376    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:54.582385    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:54.582392    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:54.582409    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:54.582418    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:54.582448    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:54.582457    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:54.582464    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:54.582471    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:54.582483    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:54.582496    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:54.582504    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:54.582512    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:54.582519    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:54.582531    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:54.582540    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:54.582549    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:54.582558    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:56.583624    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 8
	I0725 11:26:56.583636    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:56.583713    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:26:56.584476    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:26:56.584532    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:56.584544    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:56.584554    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:56.584560    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:56.584566    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:56.584573    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:56.584579    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:56.584585    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:56.584602    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:56.584611    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:56.584619    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:56.584628    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:56.584647    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:56.584655    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:56.584662    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:56.584670    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:56.584677    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:56.584685    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:56.584693    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:26:58.585214    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 9
	I0725 11:26:58.585229    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:26:58.585288    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:26:58.586152    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:26:58.586200    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:26:58.586213    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:26:58.586230    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:26:58.586245    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:26:58.586254    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:26:58.586262    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:26:58.586277    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:26:58.586286    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:26:58.586293    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:26:58.586301    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:26:58.586308    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:26:58.586316    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:26:58.586325    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:26:58.586332    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:26:58.586339    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:26:58.586347    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:26:58.586363    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:26:58.586376    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:26:58.586387    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:00.588542    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 10
	I0725 11:27:00.588567    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:00.588608    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:00.589592    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:00.589661    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:00.589672    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:00.589681    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:00.589688    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:00.589694    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:00.589701    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:00.589707    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:00.589716    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:00.589723    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:00.589731    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:00.589740    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:00.589756    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:00.589764    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:00.589769    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:00.589776    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:00.589784    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:00.589800    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:00.589814    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:00.589824    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:02.590421    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 11
	I0725 11:27:02.590440    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:02.590514    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:02.591261    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:02.591294    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:02.591314    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:02.591325    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:02.591335    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:02.591344    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:02.591351    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:02.591365    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:02.591374    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:02.591381    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:02.591389    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:02.591396    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:02.591423    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:02.591439    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:02.591448    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:02.591459    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:02.591467    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:02.591475    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:02.591483    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:02.591492    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:04.592964    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 12
	I0725 11:27:04.592981    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:04.593040    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:04.593828    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:04.593863    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:04.593873    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:04.593882    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:04.593900    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:04.593907    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:04.593918    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:04.593924    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:04.593932    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:04.593941    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:04.593948    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:04.593956    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:04.593963    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:04.593971    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:04.593986    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:04.593998    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:04.594007    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:04.594015    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:04.594023    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:04.594031    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:06.596084    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 13
	I0725 11:27:06.596098    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:06.596151    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:06.596926    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:06.596969    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:06.596980    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:06.597000    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:06.597008    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:06.597021    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:06.597043    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:06.597058    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:06.597066    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:06.597074    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:06.597081    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:06.597089    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:06.597111    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:06.597121    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:06.597135    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:06.597143    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:06.597151    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:06.597159    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:06.597166    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:06.597176    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:08.597851    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 14
	I0725 11:27:08.597867    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:08.597923    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:08.598741    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:08.598753    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:08.598767    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:08.598779    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:08.598788    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:08.598811    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:08.598819    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:08.598828    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:08.598845    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:08.598857    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:08.598866    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:08.598875    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:08.598886    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:08.598895    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:08.598910    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:08.598919    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:08.598926    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:08.598934    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:08.598942    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:08.598948    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:10.600472    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 15
	I0725 11:27:10.600488    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:10.600572    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:10.601463    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:10.601508    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:10.601518    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:10.601530    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:10.601537    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:10.601552    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:10.601560    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:10.601566    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:10.601572    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:10.601590    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:10.601600    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:10.601612    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:10.601621    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:10.601628    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:10.601641    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:10.601648    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:10.601659    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:10.601666    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:10.601675    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:10.601701    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:12.603749    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 16
	I0725 11:27:12.603782    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:12.603875    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:12.604653    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:12.604693    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:12.604705    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:12.604719    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:12.604728    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:12.604739    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:12.604748    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:12.604756    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:12.604764    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:12.604781    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:12.604789    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:12.604797    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:12.604805    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:12.604814    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:12.604820    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:12.604826    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:12.604835    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:12.604857    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:12.604871    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:12.604882    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:14.605545    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 17
	I0725 11:27:14.605561    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:14.605725    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:14.606560    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:14.606604    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:14.606614    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:14.606629    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:14.606636    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:14.606644    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:14.606651    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:14.606657    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:14.606665    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:14.606672    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:14.606681    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:14.606700    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:14.606713    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:14.606721    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:14.606728    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:14.606740    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:14.606754    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:14.606774    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:14.606785    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:14.606804    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:16.608792    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 18
	I0725 11:27:16.608805    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:16.608868    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:16.609641    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:16.609750    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:16.609769    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:16.609778    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:16.609785    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:16.609803    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:16.609823    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:16.609832    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:16.609839    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:16.609854    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:16.609870    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:16.609882    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:16.609890    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:16.609899    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:16.609908    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:16.609919    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:16.609929    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:16.609937    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:16.609945    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:16.609953    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:18.610462    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 19
	I0725 11:27:18.610490    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:18.610527    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:18.611316    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:18.611351    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:18.611361    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:18.611385    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:18.611394    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:18.611403    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:18.611411    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:18.611417    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:18.611424    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:18.611430    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:18.611438    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:18.611444    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:18.611449    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:18.611461    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:18.611468    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:18.611474    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:18.611482    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:18.611489    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:18.611496    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:18.611513    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:20.611856    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 20
	I0725 11:27:20.611871    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:20.611912    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:20.612741    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:20.612797    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:20.612807    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:20.612822    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:20.612832    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:20.612845    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:20.612851    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:20.612861    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:20.612867    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:20.612874    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:20.612882    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:20.612898    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:20.612910    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:20.612919    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:20.612928    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:20.612936    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:20.612947    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:20.612956    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:20.612963    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:20.612971    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:22.614338    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 21
	I0725 11:27:22.614355    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:22.614416    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:22.615190    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:22.615242    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:22.615254    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:22.615264    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:22.615292    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:22.615308    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:22.615319    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:22.615326    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:22.615334    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:22.615349    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:22.615362    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:22.615378    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:22.615388    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:22.615412    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:22.615429    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:22.615445    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:22.615456    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:22.615464    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:22.615471    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:22.615488    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:24.615969    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 22
	I0725 11:27:24.615985    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:24.616035    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:24.616899    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:24.616946    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:24.616956    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:24.616967    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:24.616983    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:24.616990    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:24.616996    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:24.617003    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:24.617009    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:24.617018    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:24.617026    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:24.617034    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:24.617041    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:24.617047    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:24.617062    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:24.617074    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:24.617088    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:24.617097    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:24.617104    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:24.617115    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:26.618102    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 23
	I0725 11:27:26.618131    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:26.618226    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:26.618975    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:26.619039    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:26.619050    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:26.619065    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:26.619072    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:26.619078    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:26.619085    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:26.619092    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:26.619099    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:26.619105    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:26.619114    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:26.619120    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:26.619126    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:26.619133    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:26.619139    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:26.619148    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:26.619156    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:26.619163    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:26.619171    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:26.619179    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:28.619980    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 24
	I0725 11:27:28.619995    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:28.620136    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:28.620946    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:28.620985    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:28.620992    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:28.621004    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:28.621010    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:28.621017    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:28.621023    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:28.621030    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:28.621036    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:28.621054    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:28.621061    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:28.621078    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:28.621092    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:28.621100    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:28.621108    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:28.621116    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:28.621121    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:28.621137    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:28.621152    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:28.621161    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:30.621997    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 25
	I0725 11:27:30.622032    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:30.622132    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:30.622935    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:30.622975    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:30.622997    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:30.623020    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:30.623033    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:30.623053    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:30.623063    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:30.623070    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:30.623079    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:30.623086    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:30.623095    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:30.623103    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:30.623110    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:30.623118    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:30.623123    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:30.623133    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:30.623142    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:30.623148    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:30.623155    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:30.623163    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:32.624966    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 26
	I0725 11:27:32.624979    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:32.625044    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:32.625979    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:32.626025    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:32.626043    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:32.626056    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:32.626062    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:32.626079    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:32.626088    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:32.626096    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:32.626105    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:32.626112    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:32.626121    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:32.626128    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:32.626135    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:32.626143    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:32.626150    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:32.626159    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:32.626166    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:32.626177    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:32.626185    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:32.626194    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:34.628241    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 27
	I0725 11:27:34.628255    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:34.628384    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:34.629276    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:34.629330    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:34.629346    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:34.629361    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:34.629368    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:34.629375    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:34.629384    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:34.629395    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:34.629403    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:34.629422    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:34.629437    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:34.629447    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:34.629455    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:34.629463    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:34.629478    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:34.629491    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:34.629504    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:34.629518    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:34.629526    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:34.629537    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:36.631551    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 28
	I0725 11:27:36.631987    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:36.632006    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:36.632489    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:36.632515    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:36.632543    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:36.632560    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:36.632588    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:36.632609    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:36.632621    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:36.632632    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:36.632687    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:36.632713    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:36.632725    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:36.632734    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:36.632742    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:36.632753    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:36.632763    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:36.632772    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:36.632782    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:36.632790    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:36.632809    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:36.632831    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:38.634755    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Attempt 29
	I0725 11:27:38.634780    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:27:38.634791    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | hyperkit pid from json: 6491
	I0725 11:27:38.635656    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Searching for 52:3b:65:72:1f:2c in /var/db/dhcpd_leases ...
	I0725 11:27:38.635686    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:27:38.635703    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:27:38.635715    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:27:38.635768    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:27:38.635792    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:27:38.635802    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:27:38.635811    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:27:38.635819    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:27:38.635827    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:27:38.635845    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:27:38.635862    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:27:38.635876    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:27:38.635902    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:27:38.635911    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:27:38.635921    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:27:38.635928    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:27:38.635935    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:27:38.635942    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:27:38.635949    6316 main.go:141] libmachine: (force-systemd-flag-521000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:27:40.637969    6316 client.go:171] duration metric: took 1m0.829923507s to LocalClient.Create
	I0725 11:27:42.638261    6316 start.go:128] duration metric: took 1m2.861633094s to createHost
	I0725 11:27:42.638276    6316 start.go:83] releasing machines lock for "force-systemd-flag-521000", held for 1m2.861718998s
	W0725 11:27:42.638363    6316 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-521000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 52:3b:65:72:1f:2c
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-521000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 52:3b:65:72:1f:2c
	I0725 11:27:42.701557    6316 out.go:177] 
	W0725 11:27:42.722569    6316 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 52:3b:65:72:1f:2c
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 52:3b:65:72:1f:2c
	W0725 11:27:42.722581    6316 out.go:239] * 
	* 
	W0725 11:27:42.723254    6316 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:27:42.800596    6316 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-521000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
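What the long retry block above shows: the hyperkit driver re-reads /var/db/dhcpd_leases roughly every two seconds (Attempt 19 through 29 in this excerpt), looking for a lease whose hardware address matches the new VM's MAC 52:3b:65:72:1f:2c. It only ever sees the 18 pre-existing minikube leases, so LocalClient.Create gives up after about a minute with "IP address never found in dhcp leases file". The Go sketch below illustrates that kind of lookup; the ip_address=/hw_address= field names and the findIPByMAC helper are illustrative assumptions, not minikube's actual driver code.

	// lease_lookup.go - minimal sketch (assumed lease-file layout, not the real
	// hyperkit driver code) of a MAC -> IP lookup against /var/db/dhcpd_leases.
	// The parsed form of each block is what the driver prints above as
	// "dhcp entry: {Name:... IPAddress:... HWAddress:... Lease:...}".
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
	)

	// findIPByMAC walks the lease file line by line, remembering the last
	// ip_address seen and returning it when the matching hw_address appears.
	func findIPByMAC(leaseFile, mac string) (string, error) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", err
		}
		defer f.Close()

		ipRe := regexp.MustCompile(`ip_address=(\S+)`)  // assumed field name
		hwRe := regexp.MustCompile(`hw_address=1,(\S+)`) // assumed field name

		var lastIP string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := sc.Text()
			if m := ipRe.FindStringSubmatch(line); m != nil {
				lastIP = m[1] // IP of the lease block we are currently inside
			}
			if m := hwRe.FindStringSubmatch(line); m != nil && m[1] == mac {
				return lastIP, nil // MAC matched: this block's IP is the VM's address
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		// The state the log above is stuck in: entries exist, none for this MAC.
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := findIPByMAC("/var/db/dhcpd_leases", "52:3b:65:72:1f:2c")
		if err != nil {
			fmt.Fprintln(os.Stderr, "lookup failed:", err)
			os.Exit(1)
		}
		fmt.Println("VM IP:", ip)
	}

Each failed attempt in the log corresponds to one such scan coming back empty; the driver's own retry bound and delay produce the ~1m0.8s duration reported for LocalClient.Create.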
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-521000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-521000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (180.387657ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-521000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-521000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-25 11:27:43.09662 -0700 PDT m=+3580.144881805
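The cgroup-driver probe above is the assertion this test exists for: with --force-systemd, Docker inside the guest is expected to report systemd as its cgroup driver. Because the VM never received an IP, the ssh hop fails first with exit status 50 (DRV_CP_ENDPOINT) and the comparison is never reached. Below is a hedged sketch of that kind of check; the binary path and profile name are copied from the log, while the "systemd" expectation and the test name are assumptions, not the literal docker_test.go body.

	// cgroup_driver_sketch_test.go - sketch of the check behind docker_test.go:110-112.
	package docker_sketch

	import (
		"os/exec"
		"strings"
		"testing"
	)

	func TestDockerCgroupDriverIsSystemd(t *testing.T) {
		// Ask Docker inside the guest which cgroup driver it is using.
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "force-systemd-flag-521000",
			"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
		if err != nil {
			// The branch taken in the log above: no VM IP, so ssh cannot connect.
			t.Fatalf("failed to get docker cgroup driver: %v\n%s", err, out)
		}
		// Assumption: --force-systemd should make Docker report "systemd" here.
		if !strings.Contains(string(out), "systemd") {
			t.Errorf("expected systemd cgroup driver, got %q", string(out))
		}
	}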
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-521000 -n force-systemd-flag-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-521000 -n force-systemd-flag-521000: exit status 7 (77.797831ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 11:27:43.172518    6541 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0725 11:27:43.172542    6541 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-521000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-521000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-521000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-521000: (5.249229971s)
--- FAIL: TestForceSystemdFlag (251.96s)

                                                
                                    
TestForceSystemdEnv (233.74s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-531000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0725 11:21:24.642790    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 11:22:20.235405    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-531000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m48.162541862s)

                                                
                                                
-- stdout --
	* [force-systemd-env-531000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-531000" primary control-plane node in "force-systemd-env-531000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-531000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 11:20:45.918648    6143 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:20:45.918810    6143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:45.918816    6143 out.go:304] Setting ErrFile to fd 2...
	I0725 11:20:45.918820    6143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:45.919009    6143 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 11:20:45.920613    6143 out.go:298] Setting JSON to false
	I0725 11:20:45.942800    6143 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4815,"bootTime":1721926830,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 11:20:45.942882    6143 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:20:45.965057    6143 out.go:177] * [force-systemd-env-531000] minikube v1.33.1 on Darwin 14.5
	I0725 11:20:46.007532    6143 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:20:46.007620    6143 notify.go:220] Checking for updates...
	I0725 11:20:46.049530    6143 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 11:20:46.070522    6143 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 11:20:46.091413    6143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:20:46.112477    6143 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:20:46.133532    6143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0725 11:20:46.154906    6143 config.go:182] Loaded profile config "offline-docker-633000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:20:46.154988    6143 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:20:46.196632    6143 out.go:177] * Using the hyperkit driver based on user configuration
	I0725 11:20:46.217426    6143 start.go:297] selected driver: hyperkit
	I0725 11:20:46.217437    6143 start.go:901] validating driver "hyperkit" against <nil>
	I0725 11:20:46.217447    6143 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:20:46.220226    6143 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:20:46.220343    6143 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 11:20:46.228467    6143 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 11:20:46.232302    6143 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:20:46.232333    6143 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 11:20:46.232372    6143 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:20:46.232555    6143 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 11:20:46.232604    6143 cni.go:84] Creating CNI manager for ""
	I0725 11:20:46.232637    6143 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:20:46.232650    6143 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:20:46.232735    6143 start.go:340] cluster config:
	{Name:force-systemd-env-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:20:46.232824    6143 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:20:46.274525    6143 out.go:177] * Starting "force-systemd-env-531000" primary control-plane node in "force-systemd-env-531000" cluster
	I0725 11:20:46.295464    6143 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:20:46.295488    6143 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0725 11:20:46.295502    6143 cache.go:56] Caching tarball of preloaded images
	I0725 11:20:46.295604    6143 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 11:20:46.295613    6143 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:20:46.295688    6143 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/force-systemd-env-531000/config.json ...
	I0725 11:20:46.295705    6143 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/force-systemd-env-531000/config.json: {Name:mkc1109edd551b8742b677a1268f62730bf558ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:20:46.296001    6143 start.go:360] acquireMachinesLock for force-systemd-env-531000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:25.029663    6143 start.go:364] duration metric: took 38.732921171s to acquireMachinesLock for "force-systemd-env-531000"
	I0725 11:21:25.029718    6143 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:21:25.029792    6143 start.go:125] createHost starting for "" (driver="hyperkit")
	I0725 11:21:25.051333    6143 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:21:25.051491    6143 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:21:25.051528    6143 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:21:25.060491    6143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53839
	I0725 11:21:25.060939    6143 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:21:25.061545    6143 main.go:141] libmachine: Using API Version  1
	I0725 11:21:25.061554    6143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:21:25.061775    6143 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:21:25.061934    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .GetMachineName
	I0725 11:21:25.062030    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .DriverName
	I0725 11:21:25.062182    6143 start.go:159] libmachine.API.Create for "force-systemd-env-531000" (driver="hyperkit")
	I0725 11:21:25.062205    6143 client.go:168] LocalClient.Create starting
	I0725 11:21:25.062240    6143 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem
	I0725 11:21:25.062333    6143 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:25.062362    6143 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:25.062420    6143 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem
	I0725 11:21:25.062458    6143 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:25.062470    6143 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:25.062485    6143 main.go:141] libmachine: Running pre-create checks...
	I0725 11:21:25.062494    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .PreCreateCheck
	I0725 11:21:25.062571    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:25.062718    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .GetConfigRaw
	I0725 11:21:25.113948    6143 main.go:141] libmachine: Creating machine...
	I0725 11:21:25.113973    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .Create
	I0725 11:21:25.114100    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:25.114227    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | I0725 11:21:25.114069    6202 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:21:25.114280    6143 main.go:141] libmachine: (force-systemd-env-531000) Downloading /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1195/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 11:21:25.322521    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | I0725 11:21:25.322429    6202 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/id_rsa...
	I0725 11:21:25.395630    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | I0725 11:21:25.395541    6202 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/force-systemd-env-531000.rawdisk...
	I0725 11:21:25.395649    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Writing magic tar header
	I0725 11:21:25.395670    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Writing SSH key tar header
	I0725 11:21:25.396259    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | I0725 11:21:25.396217    6202 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000 ...
	I0725 11:21:25.787222    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:25.787239    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/hyperkit.pid
	I0725 11:21:25.787250    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Using UUID fe790a93-2344-4a25-84f3-c532ef0cc8d0
	I0725 11:21:25.812486    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Generated MAC 72:a9:ee:21:ad:f
	I0725 11:21:25.812510    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-531000
	I0725 11:21:25.812552    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fe790a93-2344-4a25-84f3-c532ef0cc8d0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]str
ing(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:21:25.812592    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fe790a93-2344-4a25-84f3-c532ef0cc8d0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]str
ing(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:21:25.812639    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fe790a93-2344-4a25-84f3-c532ef0cc8d0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/force-systemd-env-531000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-sys
temd-env-531000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-531000"}
	I0725 11:21:25.812677    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fe790a93-2344-4a25-84f3-c532ef0cc8d0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/force-systemd-env-531000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/bzimage,/Users/jenkins/minikube-integration/19
326-1195/.minikube/machines/force-systemd-env-531000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-531000"
	I0725 11:21:25.812689    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 11:21:25.815536    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 DEBUG: hyperkit: Pid is 6203
	I0725 11:21:25.816510    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 0
	I0725 11:21:25.816525    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:25.816593    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:25.817516    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:25.817588    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:25.817608    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:25.817624    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:25.817636    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:25.817655    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:25.817662    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:25.817674    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:25.817687    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:25.817697    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:25.817708    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:25.817720    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:25.817728    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:25.817736    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:25.817747    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:25.817758    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:25.817766    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:25.817775    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:25.817783    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:25.817792    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:25.823051    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 11:21:25.831024    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 11:21:25.831943    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:21:25.831987    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:21:25.832002    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:21:25.832037    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:21:26.212335    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 11:21:26.212350    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 11:21:26.326922    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:21:26.326945    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:21:26.326965    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:21:26.326980    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:21:26.327813    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 11:21:26.327824    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 11:21:27.818198    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 1
	I0725 11:21:27.818212    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:27.818270    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:27.819077    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:27.819130    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:27.819146    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:27.819171    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:27.819182    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:27.819196    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:27.819206    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:27.819212    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:27.819218    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:27.819227    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:27.819244    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:27.819257    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:27.819265    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:27.819271    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:27.819278    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:27.819286    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:27.819295    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:27.819302    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:27.819310    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:27.819318    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:29.819501    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 2
	I0725 11:21:29.819515    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:29.819572    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:29.820396    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:29.820409    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:29.820415    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:29.820424    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:29.820433    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:29.820440    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:29.820446    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:29.820452    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:29.820458    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:29.820489    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:29.820497    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:29.820509    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:29.820519    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:29.820526    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:29.820533    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:29.820540    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:29.820546    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:29.820553    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:29.820558    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:29.820566    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:31.710488    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 11:21:31.710741    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 11:21:31.710776    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 11:21:31.731007    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:21:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 11:21:31.822269    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 3
	I0725 11:21:31.822296    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:31.822552    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:31.824007    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:31.824130    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:31.824152    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:31.824189    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:31.824216    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:31.824264    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:31.824277    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:31.824293    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:31.824304    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:31.824325    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:31.824335    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:31.824352    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:31.824363    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:31.824373    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:31.824385    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:31.824399    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:31.824410    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:31.824430    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:31.824446    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:31.824458    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:33.824481    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 4
	I0725 11:21:33.824510    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:33.824604    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:33.825395    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:33.825457    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:33.825473    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:33.825498    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:33.825506    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:33.825514    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:33.825524    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:33.825539    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:33.825552    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:33.825560    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:33.825568    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:33.825581    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:33.825593    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:33.825600    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:33.825617    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:33.825629    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:33.825643    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:33.825653    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:33.825661    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:33.825670    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:35.827698    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 5
	I0725 11:21:35.827711    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:35.827785    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:35.828578    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:35.828630    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:35.828641    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:35.828652    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:35.828660    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:35.828675    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:35.828684    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:35.828700    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:35.828712    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:35.828721    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:35.828730    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:35.828737    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:35.828745    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:35.828754    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:35.828760    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:35.828766    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:35.828773    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:35.828781    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:35.828788    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:35.828796    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:37.830085    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 6
	I0725 11:21:37.830101    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:37.830176    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:37.830966    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:37.831009    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:37.831017    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:37.831024    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:37.831032    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:37.831041    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:37.831046    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:37.831053    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:37.831061    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:37.831074    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:37.831082    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:37.831089    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:37.831096    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:37.831104    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:37.831110    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:37.831117    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:37.831125    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:37.831131    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:37.831139    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:37.831147    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:39.833204    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 7
	I0725 11:21:39.833231    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:39.833340    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:39.834140    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:39.834188    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:39.834200    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:39.834215    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:39.834224    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:39.834232    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:39.834239    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:39.834259    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:39.834272    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:39.834280    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:39.834286    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:39.834321    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:39.834334    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:39.834342    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:39.834348    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:39.834380    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:39.834395    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:39.834405    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:39.834417    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:39.834431    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:41.834881    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 8
	I0725 11:21:41.834897    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:41.834964    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:41.835801    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:41.835873    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:41.835907    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:41.835914    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:41.835925    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:41.835933    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:41.835940    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:41.835946    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:41.835953    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:41.835961    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:41.835969    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:41.835976    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:41.835982    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:41.835988    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:41.835996    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:41.836003    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:41.836011    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:41.836017    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:41.836025    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:41.836033    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:43.836097    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 9
	I0725 11:21:43.836112    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:43.836185    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:43.836953    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:43.837001    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:43.837010    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:43.837018    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:43.837024    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:43.837033    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:43.837053    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:43.837061    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:43.837068    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:43.837082    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:43.837095    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:43.837102    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:43.837110    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:43.837116    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:43.837123    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:43.837129    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:43.837137    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:43.837149    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:43.837158    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:43.837173    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:45.838366    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 10
	I0725 11:21:45.838381    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:45.838432    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:45.839317    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:45.839384    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:45.839396    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:45.839406    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:45.839414    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:45.839436    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:45.839442    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:45.839449    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:45.839458    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:45.839466    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:45.839478    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:45.839487    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:45.839493    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:45.839507    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:45.839515    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:45.839521    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:45.839527    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:45.839535    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:45.839541    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:45.839549    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:47.840711    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 11
	I0725 11:21:47.840724    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:47.840796    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:47.841606    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:47.841630    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:47.841645    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:47.841657    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:47.841664    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:47.841683    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:47.841697    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:47.841711    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:47.841726    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:47.841734    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:47.841742    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:47.841751    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:47.841758    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:47.841765    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:47.841781    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:47.841789    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:47.841797    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:47.841802    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:47.841811    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:47.841819    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:49.843829    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 12
	I0725 11:21:49.843844    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:49.843904    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:49.844761    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:49.844797    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:49.844807    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:49.844815    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:49.844822    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:49.844836    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:49.844843    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:49.844851    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:49.844860    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:49.844867    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:49.844875    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:49.844892    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:49.844903    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:49.844920    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:49.844928    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:49.844936    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:49.844943    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:49.844951    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:49.844956    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:49.844968    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:51.845074    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 13
	I0725 11:21:51.845087    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:51.845191    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:51.846002    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:51.846052    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:51.846060    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:51.846083    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:51.846089    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:51.846096    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:51.846105    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:51.846120    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:51.846134    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:51.846142    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:51.846151    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:51.846162    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:51.846169    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:51.846185    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:51.846198    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:51.846206    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:51.846215    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:51.846226    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:51.846234    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:51.846243    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:53.846310    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 14
	I0725 11:21:53.846326    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:53.846388    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:53.847169    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:53.847217    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:53.847229    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:53.847246    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:53.847275    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:53.847292    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:53.847308    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:53.847317    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:53.847325    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:53.847332    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:53.847340    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:53.847347    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:53.847361    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:53.847372    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:53.847387    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:53.847399    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:53.847408    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:53.847417    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:53.847425    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:53.847433    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:55.848225    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 15
	I0725 11:21:55.848238    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:55.848279    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:55.849069    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:55.849105    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:55.849113    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:55.849136    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:55.849147    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:55.849160    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:55.849178    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:55.849191    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:55.849208    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:55.849222    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:55.849246    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:55.849275    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:55.849284    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:55.849299    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:55.849311    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:55.849319    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:55.849336    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:55.849352    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:55.849362    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:55.849371    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:57.849725    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 16
	I0725 11:21:57.849738    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:57.849816    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:57.850628    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:57.850672    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:57.850681    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:57.850690    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:57.850696    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:57.850704    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:57.850710    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:57.850727    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:57.850741    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:57.850749    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:57.850759    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:57.850769    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:57.850777    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:57.850792    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:57.850801    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:57.850808    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:57.850828    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:57.850846    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:57.850859    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:57.850869    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:21:59.852650    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 17
	I0725 11:21:59.852675    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:21:59.852751    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:21:59.853548    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:21:59.853612    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:21:59.853624    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:21:59.853637    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:21:59.853646    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:21:59.853662    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:21:59.853672    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:21:59.853684    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:21:59.853693    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:21:59.853705    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:21:59.853718    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:21:59.853733    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:21:59.853746    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:21:59.853754    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:21:59.853765    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:21:59.853772    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:21:59.853780    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:21:59.853787    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:21:59.853795    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:21:59.853812    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:01.854986    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 18
	I0725 11:22:01.855002    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:01.855102    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:01.855884    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:01.855917    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:01.855937    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:01.855950    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:01.855978    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:01.855992    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:01.856009    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:01.856019    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:01.856027    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:01.856033    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:01.856040    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:01.856069    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:01.856080    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:01.856089    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:01.856097    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:01.856105    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:01.856117    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:01.856126    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:01.856134    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:01.856156    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:03.858173    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 19
	I0725 11:22:03.858187    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:03.858234    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:03.859044    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:03.859114    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:03.859152    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:03.859176    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:03.859189    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:03.859197    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:03.859205    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:03.859212    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:03.859220    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:03.859227    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:03.859235    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:03.859242    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:03.859248    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:03.859265    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:03.859277    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:03.859285    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:03.859292    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:03.859304    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:03.859318    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:03.859328    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:05.860691    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 20
	I0725 11:22:05.860706    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:05.860762    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:05.861534    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:05.861597    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:05.861608    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:05.861615    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:05.861621    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:05.861645    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:05.861657    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:05.861672    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:05.861687    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:05.861695    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:05.861703    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:05.861725    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:05.861736    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:05.861742    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:05.861751    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:05.861757    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:05.861765    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:05.861776    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:05.861784    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:05.861793    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:07.862461    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 21
	I0725 11:22:07.862477    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:07.862579    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:07.863474    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:07.863531    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:07.863540    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:07.863553    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:07.863573    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:07.863591    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:07.863605    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:07.863614    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:07.863620    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:07.863632    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:07.863642    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:07.863660    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:07.863672    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:07.863680    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:07.863688    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:07.863695    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:07.863702    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:07.863716    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:07.863729    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:07.863738    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:09.863928    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 22
	I0725 11:22:09.863941    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:09.864032    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:09.865002    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:09.865052    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:09.865061    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:09.865070    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:09.865079    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:09.865086    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:09.865094    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:09.865102    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:09.865110    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:09.865117    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:09.865126    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:09.865133    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:09.865138    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:09.865155    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:09.865169    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:09.865177    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:09.865186    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:09.865199    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:09.865206    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:09.865214    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:11.865272    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 23
	I0725 11:22:11.865289    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:11.865361    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:11.866278    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:11.866329    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:11.866339    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:11.866348    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:11.866357    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:11.866367    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:11.866375    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:11.866383    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:11.866389    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:11.866395    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:11.866402    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:11.866415    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:11.866423    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:11.866429    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:11.866436    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:11.866444    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:11.866459    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:11.866472    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:11.866481    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:11.866496    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:13.867660    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 24
	I0725 11:22:13.867677    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:13.867715    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:13.868783    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:13.868814    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:13.868825    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:13.868839    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:13.868847    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:13.868853    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:13.868859    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:13.868876    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:13.868885    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:13.868894    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:13.868902    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:13.868909    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:13.868914    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:13.868922    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:13.868930    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:13.868936    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:13.868943    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:13.868949    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:13.868955    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:13.868963    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:15.869198    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 25
	I0725 11:22:15.869211    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:15.869348    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:15.870320    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:15.870376    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:15.870387    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:15.870396    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:15.870403    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:15.870419    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:15.870434    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:15.870443    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:15.870451    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:15.870459    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:15.870468    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:15.870475    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:15.870484    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:15.870498    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:15.870520    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:15.870540    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:15.870552    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:15.870573    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:15.870582    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:15.870591    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:17.871839    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 26
	I0725 11:22:17.871855    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:17.871950    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:17.872759    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:17.872797    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:17.872810    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:17.872819    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:17.872825    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:17.872834    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:17.872844    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:17.872851    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:17.872859    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:17.872866    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:17.872874    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:17.872881    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:17.872889    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:17.872902    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:17.872909    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:17.872917    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:17.872925    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:17.872932    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:17.872940    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:17.872948    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:19.874999    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 27
	I0725 11:22:19.875014    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:19.875048    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:19.876218    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:19.876258    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:19.876269    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:19.876277    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:19.876289    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:19.876296    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:19.876302    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:19.876311    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:19.876323    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:19.876335    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:19.876344    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:19.876352    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:19.876359    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:19.876366    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:19.876373    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:19.876381    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:19.876389    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:19.876395    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:19.876401    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:19.876409    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:21.877293    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 28
	I0725 11:22:21.877317    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:21.877383    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:21.878132    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:21.878196    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:21.878206    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:21.878214    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:21.878222    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:21.878233    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:21.878244    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:21.878251    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:21.878257    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:21.878267    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:21.878275    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:21.878283    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:21.878299    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:21.878311    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:21.878319    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:21.878327    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:21.878334    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:21.878340    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:21.878357    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:21.878369    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:23.879375    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 29
	I0725 11:22:23.879391    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:23.879523    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:23.880414    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 72:a9:ee:21:ad:f in /var/db/dhcpd_leases ...
	I0725 11:22:23.880423    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:22:23.880432    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:22:23.880439    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:22:23.880445    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:22:23.880451    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:22:23.880464    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:22:23.880477    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:22:23.880487    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:22:23.880511    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:22:23.880518    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:22:23.880525    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:22:23.880537    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:22:23.880544    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:22:23.880552    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:22:23.880559    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:22:23.880567    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:22:23.880575    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:22:23.880583    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:22:23.880592    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:22:25.881931    6143 client.go:171] duration metric: took 1m0.818598234s to LocalClient.Create
	I0725 11:22:27.884172    6143 start.go:128] duration metric: took 1m2.85319153s to createHost
	I0725 11:22:27.884186    6143 start.go:83] releasing machines lock for "force-systemd-env-531000", held for 1m2.853340465s
	W0725 11:22:27.884201    6143 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 72:a9:ee:21:ad:f
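
The attempts above show the hyperkit driver polling /var/db/dhcpd_leases roughly every two seconds for a lease whose hardware address matches the VM's generated MAC (72:a9:ee:21:ad:f); after about a minute with no match, creation gives up with the "IP address never found in dhcp leases file" error. The Go sketch below is an illustration only of what such a scan could look like, assuming the stock macOS bootpd lease layout (name=/ip_address=/hw_address= lines inside braces) that the dhcp entries above were parsed from; it is not the driver's actual code, and findIPForMAC is a hypothetical helper.

```go
// Illustrative sketch (not docker-machine-driver-hyperkit's actual code):
// scan the macOS DHCP lease file for a given hardware address and report the
// leased IP, mirroring the "Searching for <mac> in /var/db/dhcpd_leases" loop
// in the log above. Assumes the stock bootpd entry layout, where ip_address=
// precedes hw_address= within each { ... } block.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC returns the IP leased to mac, or "" if no lease exists yet.
func findIPForMAC(leasePath, mac string) (string, error) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address lines look like "hw_address=1,26:9:2d:8a:7:32";
			// drop the leading hardware-type prefix before comparing.
			addr := strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(addr, ","); i >= 0 {
				addr = addr[i+1:]
			}
			if strings.EqualFold(addr, mac) {
				return ip, nil
			}
		}
	}
	return "", scanner.Err()
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "72:a9:ee:21:ad:f")
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading leases:", err)
		os.Exit(1)
	}
	if ip == "" {
		fmt.Println("no lease yet for this MAC") // the case every attempt above hit
		return
	}
	fmt.Println("lease found:", ip)
}
```

As the entries above show, bootpd records MACs without zero-padding (e.g. 26:9:2d:8a:7:32), so a plain case-insensitive string comparison is enough for an address generated in the same form.
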
	I0725 11:22:27.884515    6143 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:22:27.884546    6143 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:22:27.893586    6143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53841
	I0725 11:22:27.894057    6143 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:22:27.894439    6143 main.go:141] libmachine: Using API Version  1
	I0725 11:22:27.894469    6143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:22:27.894767    6143 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:22:27.895185    6143 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:22:27.895211    6143 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:22:27.903940    6143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53843
	I0725 11:22:27.904336    6143 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:22:27.904796    6143 main.go:141] libmachine: Using API Version  1
	I0725 11:22:27.904806    6143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:22:27.905089    6143 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:22:27.905233    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .GetState
	I0725 11:22:27.905335    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:27.905408    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:27.906406    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .DriverName
	I0725 11:22:27.968840    6143 out.go:177] * Deleting "force-systemd-env-531000" in hyperkit ...
	I0725 11:22:27.989775    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .Remove
	I0725 11:22:27.989927    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:27.989944    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:27.990010    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:27.990969    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:27.991018    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | waiting for graceful shutdown
	I0725 11:22:28.993137    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:28.993282    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:28.994203    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | waiting for graceful shutdown
	I0725 11:22:29.994721    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:29.994807    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:29.996460    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | waiting for graceful shutdown
	I0725 11:22:30.996941    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:30.997034    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:30.997756    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | waiting for graceful shutdown
	I0725 11:22:31.999887    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:32.000007    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:32.000556    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | waiting for graceful shutdown
	I0725 11:22:33.002321    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:22:33.002385    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6203
	I0725 11:22:33.003461    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | sending sigkill
	I0725 11:22:33.003471    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0725 11:22:33.014991    6143 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 72:a9:ee:21:ad:f
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 72:a9:ee:21:ad:f
	I0725 11:22:33.015011    6143 start.go:729] Will try again in 5 seconds ...
	I0725 11:22:33.024200    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:22:33 WARN : hyperkit: failed to read stderr: EOF
	I0725 11:22:33.024230    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:22:33 WARN : hyperkit: failed to read stdout: EOF
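
At this point the first provisioning attempt has been abandoned: StartHost reports the lease lookup failure as a temporary error, the half-created VM is deleted (graceful shutdown attempts, then SIGKILL), and the driver waits five seconds before re-acquiring the machines lock and trying again. Below is a minimal, hedged sketch of that fail, delete, wait, retry shape, under the assumption of a single retry as seen here; createVM and deleteVM are stand-ins, not minikube functions.

```go
// Hedged illustration of the retry flow visible in the log: a temporary
// provisioning error leads to deleting the machine, sleeping briefly, and
// attempting creation once more before the failure becomes fatal.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("IP address never found in dhcp leases file")

// createVM stands in for the hyperkit create path; it fails here the same
// way the first attempt in the log did.
func createVM(name string) error { return errNoLease }

// deleteVM stands in for the "Deleting ... in hyperkit" cleanup step.
func deleteVM(name string) { fmt.Printf("* Deleting %q in hyperkit ...\n", name) }

func startHost(name string, retryDelay time.Duration) error {
	err := createVM(name)
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	deleteVM(name)
	time.Sleep(retryDelay)
	return createVM(name) // second and final attempt
}

func main() {
	if err := startHost("force-systemd-env-531000", 5*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}
```
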
	I0725 11:22:38.015764    6143 start.go:360] acquireMachinesLock for force-systemd-env-531000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:23:30.868003    6143 start.go:364] duration metric: took 52.85120353s to acquireMachinesLock for "force-systemd-env-531000"
	I0725 11:23:30.868047    6143 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:23:30.868095    6143 start.go:125] createHost starting for "" (driver="hyperkit")
	I0725 11:23:30.910414    6143 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:23:30.910490    6143 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:23:30.910527    6143 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:23:30.919238    6143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53847
	I0725 11:23:30.919562    6143 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:23:30.919921    6143 main.go:141] libmachine: Using API Version  1
	I0725 11:23:30.919940    6143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:23:30.920160    6143 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:23:30.920283    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .GetMachineName
	I0725 11:23:30.920381    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .DriverName
	I0725 11:23:30.920491    6143 start.go:159] libmachine.API.Create for "force-systemd-env-531000" (driver="hyperkit")
	I0725 11:23:30.920511    6143 client.go:168] LocalClient.Create starting
	I0725 11:23:30.920539    6143 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem
	I0725 11:23:30.920591    6143 main.go:141] libmachine: Decoding PEM data...
	I0725 11:23:30.920601    6143 main.go:141] libmachine: Parsing certificate...
	I0725 11:23:30.920645    6143 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem
	I0725 11:23:30.920685    6143 main.go:141] libmachine: Decoding PEM data...
	I0725 11:23:30.920702    6143 main.go:141] libmachine: Parsing certificate...
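
The three log lines above for each of ca.pem and cert.pem correspond to reading the file, decoding its PEM envelope, and parsing the X.509 certificate inside. Here is a small stand-alone sketch of that check using Go's standard library; the paths are taken from the log, while the parseCert helper is illustrative rather than minikube's implementation.

```go
// Illustrative only: read a PEM file, decode the PEM block, and parse the
// certificate, as the "Reading certificate data / Decoding PEM data /
// Parsing certificate" lines above describe.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func parseCert(path string) (*x509.Certificate, error) {
	data, err := os.ReadFile(path) // "Reading certificate data from ..."
	if err != nil {
		return nil, err
	}
	block, _ := pem.Decode(data) // "Decoding PEM data..."
	if block == nil {
		return nil, fmt.Errorf("%s: no PEM block found", path)
	}
	return x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
}

func main() {
	for _, p := range []string{
		"/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem",
		"/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem",
	} {
		cert, err := parseCert(p)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(p, "expires", cert.NotAfter)
	}
}
```
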
	I0725 11:23:30.920734    6143 main.go:141] libmachine: Running pre-create checks...
	I0725 11:23:30.920740    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .PreCreateCheck
	I0725 11:23:30.920821    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:30.920856    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .GetConfigRaw
	I0725 11:23:30.931529    6143 main.go:141] libmachine: Creating machine...
	I0725 11:23:30.931538    6143 main.go:141] libmachine: (force-systemd-env-531000) Calling .Create
	I0725 11:23:30.931638    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:30.931769    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | I0725 11:23:30.931624    6297 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:23:30.931845    6143 main.go:141] libmachine: (force-systemd-env-531000) Downloading /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1195/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 11:23:31.255844    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | I0725 11:23:31.255762    6297 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/id_rsa...
	I0725 11:23:31.378280    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | I0725 11:23:31.378179    6297 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/force-systemd-env-531000.rawdisk...
	I0725 11:23:31.378299    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Writing magic tar header
	I0725 11:23:31.378310    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Writing SSH key tar header
	I0725 11:23:31.378646    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | I0725 11:23:31.378609    6297 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000 ...
	I0725 11:23:31.754475    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:31.754495    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/hyperkit.pid
	I0725 11:23:31.754552    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Using UUID cf745cbd-d264-45c8-8bd6-d39d2f9d1c58
	I0725 11:23:31.780425    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Generated MAC 8e:18:92:88:d7:d5
	I0725 11:23:31.780443    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-531000
	I0725 11:23:31.780485    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"cf745cbd-d264-45c8-8bd6-d39d2f9d1c58", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:23:31.780519    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"cf745cbd-d264-45c8-8bd6-d39d2f9d1c58", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:23:31.780577    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "cf745cbd-d264-45c8-8bd6-d39d2f9d1c58", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/force-systemd-env-531000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-531000"}
	I0725 11:23:31.780616    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U cf745cbd-d264-45c8-8bd6-d39d2f9d1c58 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/force-systemd-env-531000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-531000"
	I0725 11:23:31.780627    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 11:23:31.783455    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 DEBUG: hyperkit: Pid is 6307
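For reference, the Arguments and CmdLine entries logged above are the full hyperkit invocation the driver launches before it starts polling for an IP. A minimal Go sketch of spawning a hyperkit process with a similar argument list via os/exec follows; it is illustrative only, and the state directory, UUID, and kernel command line below are placeholders rather than values taken from this run.

// Illustrative sketch: launch a hyperkit process with an argument list
// shaped like the one logged above. Paths and the UUID are placeholders.
package main

import (
	"log"
	"os/exec"
)

func main() {
	stateDir := "/path/to/machine/state" // placeholder, not a path from this run
	args := []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid",
		"-c", "2",
		"-m", "2048M",
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", "00000000-0000-0000-0000-000000000000", // placeholder UUID
		"-s", "2:0,virtio-blk," + stateDir + "/machine.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd,earlyprintk=serial console=ttyS0",
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	if err := cmd.Start(); err != nil { // Start returns as soon as the process is spawned
		log.Fatalf("starting hyperkit: %v", err)
	}
	log.Printf("hyperkit pid: %d", cmd.Process.Pid)
}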
	I0725 11:23:31.783901    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 0
	I0725 11:23:31.783920    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:31.784015    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:31.785242    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:31.785319    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:31.785338    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:31.785369    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:31.785397    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:31.785435    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:31.785455    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:31.785467    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:31.785476    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:31.785485    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:31.785496    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:31.785505    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:31.785514    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:31.785531    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:31.785540    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:31.785563    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:31.785577    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:31.785591    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:31.785606    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:31.785623    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
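Each attempt above scans every record in /var/db/dhcpd_leases for the new VM's MAC address (8e:18:92:88:d7:d5), which never appears among the 18 existing entries. A minimal sketch of that kind of lookup follows, assuming the macOS bootpd lease format of brace-delimited records with name=, ip_address=, and hw_address= fields; the parser and helper names are illustrative, not the driver's actual code.

// Illustrative sketch: look up an IP in a dhcpd_leases-style file by
// hardware (MAC) address. Assumes brace-delimited records containing
// name=/ip_address=/hw_address= lines, as produced by macOS bootpd.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

type lease struct {
	Name, IP, HWAddress string
}

func parseLeases(path string) ([]lease, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var leases []lease
	var cur lease
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			cur = lease{} // start of a record
		case line == "}":
			leases = append(leases, cur) // end of a record
		case strings.HasPrefix(line, "name="):
			cur.Name = strings.TrimPrefix(line, "name=")
		case strings.HasPrefix(line, "ip_address="):
			cur.IP = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// stored as "1,aa:bb:cc:dd:ee:ff"; keep only the MAC part
			cur.HWAddress = strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(cur.HWAddress, ","); i >= 0 {
				cur.HWAddress = cur.HWAddress[i+1:]
			}
		}
	}
	return leases, sc.Err()
}

func main() {
	leases, err := parseLeases("/var/db/dhcpd_leases")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	target := "8e:18:92:88:d7:d5" // the MAC being searched for in the log above
	for _, l := range leases {
		if l.HWAddress == target {
			fmt.Printf("found %s at %s\n", target, l.IP)
			return
		}
	}
	fmt.Println("no lease yet for", target)
}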
	I0725 11:23:31.791155    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 11:23:31.799120    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/force-systemd-env-531000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 11:23:31.799980    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:23:31.799994    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:23:31.800006    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:23:31.800016    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:23:32.178255    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 11:23:32.178269    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 11:23:32.292918    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:23:32.292934    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:23:32.292945    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:23:32.292953    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:23:32.293832    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 11:23:32.293842    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 11:23:33.785988    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 1
	I0725 11:23:33.786003    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:33.786031    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:33.786907    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:33.786974    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:33.786989    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:33.787014    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:33.787028    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:33.787041    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:33.787053    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:33.787060    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:33.787068    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:33.787090    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:33.787099    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:33.787106    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:33.787114    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:33.787126    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:33.787132    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:33.787141    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:33.787150    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:33.787157    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:33.787164    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:33.787182    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
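The numbered attempts above repeat the same lease scan on a roughly two-second interval until the MAC address shows up or the driver gives up. A minimal sketch of such a poll-and-retry loop follows; the lookupIP helper, the interval, and the attempt cap are assumptions for illustration, not the driver's actual implementation.

// Illustrative sketch: poll for a DHCP lease on a fixed interval, in the
// same shape as the "Attempt N" lines above. lookupIP is a stand-in for a
// lease-file scan like the one sketched earlier.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("lease not found")

// lookupIP is a placeholder for scanning /var/db/dhcpd_leases for a MAC.
func lookupIP(mac string) (string, error) {
	return "", errNotFound
}

func waitForIP(mac string, attempts int, interval time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		time.Sleep(interval)
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
}

func main() {
	ip, err := waitForIP("8e:18:92:88:d7:d5", 30, 2*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("VM IP:", ip)
}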
	I0725 11:23:35.787812    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 2
	I0725 11:23:35.787829    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:35.787916    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:35.788723    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:35.788792    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:35.788807    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:35.788821    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:35.788829    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:35.788840    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:35.788848    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:35.788858    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:35.788867    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:35.788874    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:35.788881    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:35.788889    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:35.788903    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:35.788915    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:35.788923    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:35.788931    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:35.788938    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:35.788945    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:35.788971    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:35.788988    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:37.698175    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0725 11:23:37.698290    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0725 11:23:37.698299    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0725 11:23:37.720454    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | 2024/07/25 11:23:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0725 11:23:37.791122    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 3
	I0725 11:23:37.791150    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:37.791387    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:37.792943    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:37.793083    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:37.793097    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:37.793109    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:37.793117    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:37.793129    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:37.793141    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:37.793152    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:37.793160    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:37.793181    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:37.793193    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:37.793203    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:37.793217    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:37.793229    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:37.793240    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:37.793251    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:37.793263    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:37.793273    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:37.793287    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:37.793299    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:39.795187    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 4
	I0725 11:23:39.795203    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:39.795333    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:39.796121    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:39.796171    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:39.796179    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:39.796196    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:39.796211    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:39.796220    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:39.796226    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:39.796232    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:39.796244    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:39.796253    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:39.796269    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:39.796281    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:39.796290    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:39.796298    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:39.796305    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:39.796313    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:39.796321    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:39.796335    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:39.796348    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:39.796358    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:41.798388    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 5
	I0725 11:23:41.798403    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:41.798512    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:41.799440    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:41.799495    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:41.799506    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:41.799515    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:41.799522    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:41.799530    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:41.799536    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:41.799543    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:41.799548    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:41.799555    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:41.799569    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:41.799579    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:41.799587    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:41.799601    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:41.799612    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:41.799623    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:41.799631    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:41.799638    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:41.799644    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:41.799652    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:43.800055    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 6
	I0725 11:23:43.800072    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:43.800158    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:43.800954    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:43.801000    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:43.801010    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:43.801022    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:43.801029    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:43.801036    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:43.801042    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:43.801051    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:43.801069    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:43.801076    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:43.801082    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:43.801090    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:43.801098    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:43.801114    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:43.801127    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:43.801142    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:43.801150    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:43.801171    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:43.801186    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:43.801202    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:45.801322    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 7
	I0725 11:23:45.801338    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:45.801409    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:45.802276    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:45.802392    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:45.802403    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:45.802410    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:45.802419    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:45.802428    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:45.802435    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:45.802459    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:45.802466    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:45.802473    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:45.802481    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:45.802488    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:45.802495    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:45.802504    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:45.802512    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:45.802526    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:45.802540    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:45.802549    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:45.802555    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:45.802563    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:47.803031    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 8
	I0725 11:23:47.803049    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:47.803066    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:47.803951    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:47.803970    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:47.803986    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:47.803999    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:47.804006    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:47.804015    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:47.804024    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:47.804044    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:47.804060    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:47.804069    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:47.804077    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:47.804085    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:47.804093    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:47.804099    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:47.804106    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:47.804114    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:47.804120    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:47.804128    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:47.804144    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:47.804151    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:49.806211    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 9
	I0725 11:23:49.806229    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:49.806281    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:49.807147    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:49.807188    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:49.807212    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:49.807221    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:49.807230    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:49.807236    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:49.807265    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:49.807280    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:49.807290    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:49.807298    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:49.807304    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:49.807312    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:49.807324    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:49.807332    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:49.807339    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:49.807348    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:49.807362    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:49.807373    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:49.807383    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:49.807389    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:51.809418    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 10
	I0725 11:23:51.809431    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:51.809565    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:51.810523    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:51.810587    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:51.810602    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:51.810610    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:51.810628    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:51.810656    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:51.810669    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:51.810687    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:51.810694    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:51.810700    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:51.810707    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:51.810715    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:51.810722    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:51.810728    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:51.810740    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:51.810750    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:51.810759    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:51.810777    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:51.810792    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:51.810813    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:53.810945    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 11
	I0725 11:23:53.810961    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:53.811059    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:53.811868    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:53.811911    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:53.811923    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:53.811937    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:53.811947    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:53.811953    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:53.811974    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:53.811989    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:53.811997    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:53.812004    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:53.812020    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:53.812034    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:53.812050    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:53.812062    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:53.812070    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:53.812078    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:53.812085    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:53.812093    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:53.812104    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:53.812114    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:55.812216    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 12
	I0725 11:23:55.812229    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:55.812294    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:55.813267    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:55.813315    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:55.813325    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:55.813335    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:55.813343    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:55.813350    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:55.813356    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:55.813363    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:55.813375    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:55.813389    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:55.813398    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:55.813415    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:55.813428    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:55.813436    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:55.813444    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:55.813451    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:55.813459    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:55.813466    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:55.813479    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:55.813488    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:57.815510    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 13
	I0725 11:23:57.815526    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:57.815595    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:57.816383    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:57.816412    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:57.816431    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:57.816446    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:57.816454    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:57.816461    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:57.816469    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:57.816481    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:57.816489    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:57.816497    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:57.816504    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:57.816521    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:57.816528    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:57.816534    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:57.816541    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:57.816549    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:57.816555    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:57.816563    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:57.816570    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:57.816577    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:23:59.817351    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 14
	I0725 11:23:59.817363    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:23:59.817425    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:23:59.818230    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:23:59.818299    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:23:59.818315    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:23:59.818340    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:23:59.818350    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:23:59.818357    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:23:59.818365    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:23:59.818372    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:23:59.818378    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:23:59.818392    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:23:59.818404    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:23:59.818413    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:23:59.818421    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:23:59.818433    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:23:59.818443    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:23:59.818453    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:23:59.818462    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:23:59.818469    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:23:59.818475    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:23:59.818483    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:01.819761    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 15
	I0725 11:24:01.819777    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:01.819812    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:01.820771    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:01.820820    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:01.820832    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:01.820844    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:01.820851    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:01.820862    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:01.820869    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:01.820875    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:01.820883    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:01.820891    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:01.820897    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:01.820904    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:01.820917    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:01.820925    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:01.820932    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:01.820949    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:01.820961    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:01.820971    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:01.820978    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:01.820987    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:03.823012    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 16
	I0725 11:24:03.823029    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:03.823070    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:03.823850    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:03.823897    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:03.823905    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:03.823927    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:03.823936    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:03.823943    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:03.823950    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:03.823956    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:03.823963    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:03.823972    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:03.823984    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:03.823993    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:03.824000    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:03.824008    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:03.824025    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:03.824040    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:03.824047    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:03.824055    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:03.824062    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:03.824070    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:05.826112    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 17
	I0725 11:24:05.826125    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:05.826202    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:05.827012    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:05.827054    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:05.827062    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:05.827071    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:05.827082    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:05.827099    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:05.827110    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:05.827129    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:05.827144    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:05.827152    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:05.827158    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:05.827167    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:05.827175    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:05.827182    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:05.827190    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:05.827197    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:05.827205    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:05.827211    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:05.827219    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:05.827237    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:07.829287    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 18
	I0725 11:24:07.829302    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:07.829335    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:07.830276    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:07.830323    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:07.830336    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:07.830348    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:07.830363    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:07.830371    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:07.830377    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:07.830383    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:07.830389    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:07.830395    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:07.830402    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:07.830408    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:07.830414    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:07.830422    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:07.830430    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:07.830446    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:07.830459    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:07.830471    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:07.830479    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:07.830488    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:09.832529    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 19
	I0725 11:24:09.832548    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:09.832594    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:09.833433    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:09.833493    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:09.833505    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:09.833524    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:09.833535    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:09.833550    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:09.833560    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:09.833567    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:09.833575    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:09.833590    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:09.833603    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:09.833610    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:09.833618    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:09.833632    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:09.833643    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:09.833651    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:09.833658    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:09.833666    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:09.833674    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:09.833694    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:11.835503    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 20
	I0725 11:24:11.835523    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:11.835593    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:11.836401    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:11.836446    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:11.836466    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:11.836499    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:11.836510    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:11.836519    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:11.836525    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:11.836539    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:11.836552    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:11.836560    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:11.836568    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:11.836576    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:11.836583    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:11.836590    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:11.836598    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:11.836610    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:11.836618    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:11.836660    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:11.836703    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:11.836747    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:13.838695    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 21
	I0725 11:24:13.838710    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:13.838789    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:13.839660    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:13.839699    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:13.839707    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:13.839717    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:13.839725    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:13.839741    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:13.839758    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:13.839766    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:13.839772    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:13.839782    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:13.839788    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:13.839794    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:13.839800    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:13.839806    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:13.839813    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:13.839821    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:13.839840    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:13.839850    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:13.839861    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:13.839870    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:15.841124    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 22
	I0725 11:24:15.841140    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:15.841214    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:15.842003    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:15.842045    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:15.842052    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:15.842072    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:15.842083    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:15.842094    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:15.842103    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:15.842111    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:15.842118    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:15.842133    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:15.842145    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:15.842154    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:15.842162    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:15.842171    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:15.842180    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:15.842187    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:15.842206    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:15.842212    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:15.842219    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:15.842227    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:17.843454    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 23
	I0725 11:24:17.843468    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:17.843520    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:17.844539    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:17.844587    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:17.844600    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:17.844617    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:17.844632    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:17.844649    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:17.844672    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:17.844680    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:17.844687    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:17.844694    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:17.844703    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:17.844717    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:17.844727    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:17.844735    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:17.844741    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:17.844753    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:17.844768    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:17.844778    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:17.844786    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:17.844799    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:19.846781    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 24
	I0725 11:24:19.846797    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:19.846834    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:19.847668    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:19.847709    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:19.847719    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:19.847741    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:19.847752    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:19.847764    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:19.847780    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:19.847789    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:19.847797    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:19.847804    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:19.847811    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:19.847823    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:19.847833    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:19.847843    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:19.847851    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:19.847858    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:19.847865    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:19.847883    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:19.847899    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:19.847909    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
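	Each "Attempt N" block above records the same behavior: roughly every two seconds the hyperkit driver re-reads /var/db/dhcpd_leases, looking for an entry whose hardware address matches the new VM's MAC (8e:18:92:88:d7:d5); until that entry shows up, it logs the 18 existing leases and retries. The following is a minimal Go sketch of that polling pattern, assuming the usual macOS lease-file layout (name=, ip_address= and hw_address= fields between braces); the function and variable names are illustrative only, not the driver's actual code.
	
	// Hypothetical sketch (not the minikube driver implementation): poll
	// /var/db/dhcpd_leases until an entry whose hw_address matches the VM's
	// MAC appears, then print the leased IP.
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	// findIPForMAC scans the leases file once and returns the IP bound to mac,
	// or "" if no matching entry exists yet.
	func findIPForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
	
		var ip, hw string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address looks like "1,96:fa:ee:ec:8e:9"; drop the "1," type prefix.
				hw = strings.TrimPrefix(line, "hw_address=")
				if i := strings.Index(hw, ","); i >= 0 {
					hw = hw[i+1:]
				}
			case line == "}":
				// End of one lease entry: compare and reset.
				if normalizeMAC(hw) == normalizeMAC(mac) {
					return ip, nil
				}
				ip, hw = "", ""
			}
		}
		return "", sc.Err()
	}
	
	// normalizeMAC strips leading zeros from each octet so "8e:09" matches "8e:9",
	// mirroring how macOS writes octets without zero padding.
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			parts[i] = strings.TrimLeft(p, "0")
			if parts[i] == "" {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}
	
	func main() {
		const mac = "8e:18:92:88:d7:d5" // MAC the log above is searching for
		for attempt := 1; attempt <= 60; attempt++ {
			ip, err := findIPForMAC("/var/db/dhcpd_leases", mac)
			if err == nil && ip != "" {
				fmt.Println("VM IP:", ip)
				return
			}
			time.Sleep(2 * time.Second) // retry on roughly the cadence seen in the log
		}
		fmt.Println("timed out waiting for a DHCP lease")
	}
	
	Two details in the log motivate the sketch's choices: macOS writes hex octets without zero padding (for example 26:9:2d:8a:7:32), so a literal comparison against a zero-padded MAC would never match, hence the normalization step; and every lease in this log is named "minikube", so matching by entry name would be ambiguous, which is presumably why the driver keys its search on the MAC address instead.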
	I0725 11:24:21.849918    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 25
	I0725 11:24:21.849939    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:21.850012    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:21.850887    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:21.850938    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:21.850956    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:21.850966    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:21.850976    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:21.850982    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:21.850995    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:21.851021    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:21.851039    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:21.851050    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:21.851058    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:21.851064    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:21.851070    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:21.851078    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:21.851086    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:21.851101    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:21.851113    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:21.851122    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:21.851128    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:21.851134    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:23.853165    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 26
	I0725 11:24:23.853177    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:23.853253    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:23.854047    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:23.854103    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:23.854113    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:23.854135    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:23.854145    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:23.854155    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:23.854165    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:23.854181    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:23.854196    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:23.854209    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:23.854218    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:23.854225    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:23.854234    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:23.854240    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:23.854248    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:23.854254    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:23.854260    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:23.854279    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:23.854294    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:23.854304    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:25.854412    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 27
	I0725 11:24:25.854427    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:25.854501    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:25.855292    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:25.855352    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:25.855365    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:25.855385    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:25.855395    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:25.855410    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:25.855420    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:25.855428    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:25.855436    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:25.855443    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:25.855451    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:25.855467    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:25.855475    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:25.855483    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:25.855490    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:25.855496    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:25.855504    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:25.855518    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:25.855531    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:25.855543    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:27.857589    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 28
	I0725 11:24:27.857605    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:27.857645    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:27.858624    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:27.858665    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:27.858676    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:27.858686    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:27.858701    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:27.858708    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:27.858715    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:27.858721    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:27.858730    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:27.858737    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:27.858744    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:27.858750    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:27.858757    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:27.858764    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:27.858770    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:27.858788    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:27.858801    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:27.858817    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:27.858830    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:27.858839    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:29.860367    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Attempt 29
	I0725 11:24:29.860383    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:24:29.860425    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | hyperkit pid from json: 6307
	I0725 11:24:29.861223    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Searching for 8e:18:92:88:d7:d5 in /var/db/dhcpd_leases ...
	I0725 11:24:29.861241    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0725 11:24:29.861248    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 11:24:29.861254    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 11:24:29.861260    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 11:24:29.861268    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 11:24:29.861275    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 11:24:29.861282    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 11:24:29.861288    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 11:24:29.861294    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 11:24:29.861309    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 11:24:29.861322    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 11:24:29.861337    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 11:24:29.861373    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 11:24:29.861382    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 11:24:29.861390    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 11:24:29.861398    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 11:24:29.861405    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 11:24:29.861415    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 11:24:29.861438    6143 main.go:141] libmachine: (force-systemd-env-531000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 11:24:31.862208    6143 client.go:171] duration metric: took 1m0.94056938s to LocalClient.Create
	I0725 11:24:33.863546    6143 start.go:128] duration metric: took 1m2.99426059s to createHost
	I0725 11:24:33.863574    6143 start.go:83] releasing machines lock for "force-systemd-env-531000", held for 1m2.994378568s
	W0725 11:24:33.863655    6143 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-531000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:18:92:88:d7:d5
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-531000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:18:92:88:d7:d5
	I0725 11:24:33.885173    6143 out.go:177] 
	W0725 11:24:33.926826    6143 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:18:92:88:d7:d5
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:18:92:88:d7:d5
	W0725 11:24:33.926842    6143 out.go:239] * 
	* 
	W0725 11:24:33.927515    6143 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:24:33.989834    6143 out.go:177] 

                                                
                                                
** /stderr **
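The retry loop in the stderr above shows the hyperkit driver polling /var/db/dhcpd_leases roughly every two seconds, looking for a lease whose hardware address matches the VM's generated MAC (8e:18:92:88:d7:d5); after roughly a minute of attempts it gives up with "IP address never found in dhcp leases file". The Go sketch below is only an illustrative approximation of that lookup, not the docker-machine-driver-hyperkit code itself, and the lease-file layout (brace-delimited blocks with name=, ip_address= and hw_address= keys) is assumed from the entries echoed in the log.

// leasewatch.go: illustrative sketch of the MAC-to-IP lookup the hyperkit
// driver performs against /var/db/dhcpd_leases (not the driver's real code).
// The lease-file field names are an assumption based on the log entries above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the leases file once and returns the ip_address of the
// lease block whose hw_address contains mac (assumes ip_address precedes
// hw_address within a block).
func findIPForMAC(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease block starts; forget the previous fields
			ip = ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			if ip != "" {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	const mac = "8e:18:92:88:d7:d5" // the MAC the failed run was waiting for
	deadline := time.Now().Add(1 * time.Minute)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Printf("attempt %d: found %s -> %s\n", attempt, mac, ip)
			return
		}
		time.Sleep(2 * time.Second) // matches the ~2s cadence of the attempts above
	}
	fmt.Println("IP address never found in dhcp leases file")
}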
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-531000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-531000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-531000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (176.945456ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-531000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-531000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-25 11:24:34.286061 -0700 PDT m=+3391.337787565
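The cgroup-driver assertion above (docker_test.go:110) shells out to the freshly built binary and asks Docker inside the VM for its cgroup driver; because the VM never obtained an IP, the ssh sub-command fails with DRV_CP_ENDPOINT before Docker is reached. Below is a hypothetical stand-alone version of that probe; the real helper in docker_test.go differs in structure and error handling.

// cgroupprobe.go: hypothetical stand-alone version of the check performed by
// docker_test.go:110; only a sketch of the same command invocation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// cgroupDriver runs `minikube -p <profile> ssh "docker info --format {{.CgroupDriver}}"`
// and returns the trimmed output, e.g. "systemd" or "cgroupfs".
func cgroupDriver(minikubeBin, profile string) (string, error) {
	cmd := exec.Command(minikubeBin, "-p", profile, "ssh", "docker info --format {{.CgroupDriver}}")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// On the failed run above this path is taken: exit status 50,
		// "Unable to get control-plane node ... endpoint".
		return "", fmt.Errorf("ssh docker info failed: %v\n%s", err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := cgroupDriver("out/minikube-darwin-amd64", "force-systemd-env-531000")
	if err != nil {
		fmt.Println(err)
		return
	}
	// TestForceSystemdEnv expects "systemd" here when MINIKUBE_FORCE_SYSTEMD is honoured.
	fmt.Println("cgroup driver:", driver)
}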
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-531000 -n force-systemd-env-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-531000 -n force-systemd-env-531000: exit status 7 (77.344158ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 11:24:34.361539    6367 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0725 11:24:34.361561    6367 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-531000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-531000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-531000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-531000: (5.246273421s)
--- FAIL: TestForceSystemdEnv (233.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (226.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-485000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-485000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-485000 -v=7 --alsologtostderr: (27.073391546s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-485000 --wait=true -v=7 --alsologtostderr
E0725 10:49:08.436171    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:51:24.585419    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-485000 --wait=true -v=7 --alsologtostderr: exit status 90 (3m15.032219326s)

                                                
                                                
-- stdout --
	* [ha-485000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-485000" primary control-plane node in "ha-485000" cluster
	* Restarting existing hyperkit VM for "ha-485000" ...
	* Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	* Enabled addons: 
	
	* Starting "ha-485000-m02" control-plane node in "ha-485000" cluster
	* Restarting existing hyperkit VM for "ha-485000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-485000-m03" control-plane node in "ha-485000" cluster
	* Restarting existing hyperkit VM for "ha-485000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	* Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	  - env NO_PROXY=192.169.0.5
	  - env NO_PROXY=192.169.0.5,192.169.0.6
	* Verifying Kubernetes components...
	
	* Starting "ha-485000-m04" worker node in "ha-485000" cluster
	* Restarting existing hyperkit VM for "ha-485000-m04" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	

                                                
                                                
-- /stdout --
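The stdout above is the third step of the stop/restart cycle driven by ha_test.go:456-467: record the node list, stop the four-node cluster, then restart it with --wait=true and expect every control-plane and worker node to return. A hypothetical condensed form of that sequence is sketched below; the real test wraps each step in its own Run helper with timeouts and assertions.

// restartcycle.go: hypothetical condensed form of the sequence driven by
// ha_test.go:456-467 (node list, stop, start --wait=true).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	const bin = "out/minikube-darwin-amd64"
	const profile = "ha-485000"

	steps := [][]string{
		{"node", "list", "-p", profile, "-v=7", "--alsologtostderr"},
		{"stop", "-p", profile, "-v=7", "--alsologtostderr"},
		// In the failed run above this step exited with status 90 after ~3m15s.
		{"start", "-p", profile, "--wait=true", "-v=7", "--alsologtostderr"},
	}
	for _, args := range steps {
		if err := run(bin, args...); err != nil {
			fmt.Fprintf(os.Stderr, "step %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
	fmt.Println("cluster restarted; RestartClusterKeepsNodes would now compare node lists")
}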
** stderr ** 
	I0725 10:48:26.008505    3774 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:48:26.008703    3774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:48:26.008708    3774 out.go:304] Setting ErrFile to fd 2...
	I0725 10:48:26.008712    3774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:48:26.008889    3774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 10:48:26.010330    3774 out.go:298] Setting JSON to false
	I0725 10:48:26.034230    3774 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2876,"bootTime":1721926830,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 10:48:26.034337    3774 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:48:26.057780    3774 out.go:177] * [ha-485000] minikube v1.33.1 on Darwin 14.5
	I0725 10:48:26.099403    3774 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 10:48:26.099443    3774 notify.go:220] Checking for updates...
	I0725 10:48:26.142252    3774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:26.163519    3774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 10:48:26.184535    3774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:48:26.205465    3774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 10:48:26.226618    3774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 10:48:26.248320    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:26.248484    3774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:48:26.249112    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:26.249222    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:48:26.258893    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51885
	I0725 10:48:26.259439    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:48:26.260047    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:48:26.260058    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:48:26.260427    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:48:26.260644    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:26.289300    3774 out.go:177] * Using the hyperkit driver based on existing profile
	I0725 10:48:26.331665    3774 start.go:297] selected driver: hyperkit
	I0725 10:48:26.331692    3774 start.go:901] validating driver "hyperkit" against &{Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:48:26.331911    3774 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 10:48:26.332099    3774 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:48:26.332295    3774 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 10:48:26.342212    3774 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 10:48:26.348291    3774 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:26.348316    3774 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 10:48:26.351632    3774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 10:48:26.351670    3774 cni.go:84] Creating CNI manager for ""
	I0725 10:48:26.351677    3774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 10:48:26.351755    3774 start.go:340] cluster config:
	{Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:48:26.351859    3774 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:48:26.394511    3774 out.go:177] * Starting "ha-485000" primary control-plane node in "ha-485000" cluster
	I0725 10:48:26.415566    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:48:26.415642    3774 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0725 10:48:26.415668    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:48:26.415915    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:48:26.415934    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:48:26.416129    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:26.417069    3774 start.go:360] acquireMachinesLock for ha-485000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:48:26.417183    3774 start.go:364] duration metric: took 90.924µs to acquireMachinesLock for "ha-485000"
	I0725 10:48:26.417209    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:48:26.417242    3774 fix.go:54] fixHost starting: 
	I0725 10:48:26.417573    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:26.417601    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:48:26.426437    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51887
	I0725 10:48:26.426806    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:48:26.427140    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:48:26.427151    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:48:26.427362    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:48:26.427487    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:26.427621    3774 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:48:26.427738    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:26.427816    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3271
	I0725 10:48:26.428722    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid 3271 missing from process table
	I0725 10:48:26.428789    3774 fix.go:112] recreateIfNeeded on ha-485000: state=Stopped err=<nil>
	I0725 10:48:26.428816    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	W0725 10:48:26.428913    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:48:26.450263    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000" ...
	I0725 10:48:26.492478    3774 main.go:141] libmachine: (ha-485000) Calling .Start
	I0725 10:48:26.492777    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:26.492834    3774 main.go:141] libmachine: (ha-485000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid
	I0725 10:48:26.494964    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid 3271 missing from process table
	I0725 10:48:26.494992    3774 main.go:141] libmachine: (ha-485000) DBG | pid 3271 is in state "Stopped"
	I0725 10:48:26.495011    3774 main.go:141] libmachine: (ha-485000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid...
	I0725 10:48:26.495351    3774 main.go:141] libmachine: (ha-485000) DBG | Using UUID 6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3
	I0725 10:48:26.602890    3774 main.go:141] libmachine: (ha-485000) DBG | Generated MAC 52:76:82:a1:51:13
	I0725 10:48:26.602911    3774 main.go:141] libmachine: (ha-485000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:48:26.603041    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003947b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:26.603067    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003947b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:26.603128    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/ha-485000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:48:26.603166    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/ha-485000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:48:26.603183    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:48:26.604450    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Pid is 3787
	I0725 10:48:26.604799    3774 main.go:141] libmachine: (ha-485000) DBG | Attempt 0
	I0725 10:48:26.604824    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:26.604870    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3787
	I0725 10:48:26.606553    3774 main.go:141] libmachine: (ha-485000) DBG | Searching for 52:76:82:a1:51:13 in /var/db/dhcpd_leases ...
	I0725 10:48:26.606607    3774 main.go:141] libmachine: (ha-485000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:48:26.606642    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:48:26.606660    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e130}
	I0725 10:48:26.606672    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a3e09c}
	I0725 10:48:26.606684    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e019}
	I0725 10:48:26.606696    3774 main.go:141] libmachine: (ha-485000) DBG | Found match: 52:76:82:a1:51:13
	I0725 10:48:26.606707    3774 main.go:141] libmachine: (ha-485000) DBG | IP: 192.169.0.5
	I0725 10:48:26.606731    3774 main.go:141] libmachine: (ha-485000) Calling .GetConfigRaw
	I0725 10:48:26.607371    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:26.607542    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:26.608260    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:48:26.608270    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:26.608385    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:26.608483    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:26.608567    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:26.608654    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:26.608755    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:26.608878    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:26.609107    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:26.609118    3774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 10:48:26.612320    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:48:26.665658    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:48:26.666425    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:26.666446    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:26.666486    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:26.666502    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:27.049138    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:48:27.049167    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:48:27.163716    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:27.163734    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:27.163745    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:27.163771    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:27.164666    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:48:27.164679    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:48:32.750889    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:48:32.750945    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:48:32.750956    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:48:32.776771    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:48:37.667735    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:48:37.667749    3774 main.go:141] libmachine: (ha-485000) Calling .GetMachineName
	I0725 10:48:37.667914    3774 buildroot.go:166] provisioning hostname "ha-485000"
	I0725 10:48:37.667925    3774 main.go:141] libmachine: (ha-485000) Calling .GetMachineName
	I0725 10:48:37.668027    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.668112    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.668192    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.668288    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.668362    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.668500    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.668656    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.668664    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000 && echo "ha-485000" | sudo tee /etc/hostname
	I0725 10:48:37.727283    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000
	
	I0725 10:48:37.727299    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.727438    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.727532    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.727625    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.727717    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.727855    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.727982    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.727993    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:48:37.785056    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 10:48:37.785076    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:48:37.785094    3774 buildroot.go:174] setting up certificates
	I0725 10:48:37.785101    3774 provision.go:84] configureAuth start
	I0725 10:48:37.785108    3774 main.go:141] libmachine: (ha-485000) Calling .GetMachineName
	I0725 10:48:37.785245    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:37.785333    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.785431    3774 provision.go:143] copyHostCerts
	I0725 10:48:37.785463    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:37.785523    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:48:37.785532    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:37.785675    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:48:37.785906    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:37.785936    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:48:37.785941    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:37.786010    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:48:37.786157    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:37.786185    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:48:37.786190    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:37.786257    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:48:37.786415    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000 san=[127.0.0.1 192.169.0.5 ha-485000 localhost minikube]
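
The provisioning step above issues a Docker server certificate whose SANs cover the node's addresses and names (127.0.0.1, 192.169.0.5, ha-485000, localhost, minikube). As a hedged illustration only, and not minikube's actual provision code, the Go sketch below builds a certificate with those SANs; it self-signs for brevity where minikube signs with its CA key, and the output file name server.pem is a placeholder.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key pair for the new server certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-485000"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s seen elsewhere in this log
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the log line above.
			DNSNames:    []string{"ha-485000", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		}

		// Self-signed for brevity; minikube signs with its CA key/cert instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		out, err := os.Create("server.pem") // placeholder file name
		if err != nil {
			panic(err)
		}
		defer out.Close()
		if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}
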
	I0725 10:48:37.823550    3774 provision.go:177] copyRemoteCerts
	I0725 10:48:37.823600    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:48:37.823615    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.823730    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.823832    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.823929    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.824016    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:37.858513    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:48:37.858593    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0725 10:48:37.877705    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:48:37.877768    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 10:48:37.897239    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:48:37.897295    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:48:37.916551    3774 provision.go:87] duration metric: took 131.43608ms to configureAuth
	I0725 10:48:37.916563    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:48:37.916723    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:37.916742    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:37.916888    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.916985    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.917074    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.917167    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.917240    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.917351    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.917476    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.917483    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:48:37.966249    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:48:37.966266    3774 buildroot.go:70] root file system type: tmpfs
	I0725 10:48:37.966344    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:48:37.966356    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.966476    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.966563    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.966659    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.966744    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.966879    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.967017    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.967059    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:48:38.025932    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 10:48:38.025951    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:38.026084    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:38.026186    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:38.026311    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:38.026409    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:38.026537    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:38.026681    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:38.026694    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:48:39.678604    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 10:48:39.678619    3774 machine.go:97] duration metric: took 13.070176391s to provisionDockerMachine
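
The provisioning above follows a write-compare-swap pattern for the Docker unit: the desired unit is written to docker.service.new, diffed against the live file, and only on a difference moved into place and followed by daemon-reload, enable, and restart (here the diff failed because no unit existed yet, so the new file was installed and the service enabled). Below is a hedged local Go sketch of that same pattern, with placeholder paths rather than minikube's remote SSH runner.

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit mirrors the pattern in the SSH command above: write the desired
	// unit next to the live one and only swap + daemon-reload + restart when the
	// contents actually differ.
	func updateUnit(path string, desired []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, desired) {
			return nil // nothing changed: no reload, no restart
		}
		if err := os.WriteFile(path+".new", desired, 0644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		// Placeholder path and contents; on the VM the target is
		// /lib/systemd/system/docker.service and the body is the unit shown above.
		if err := updateUnit("docker.service", []byte("[Unit]\n")); err != nil {
			fmt.Println("update failed:", err)
		}
	}
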
	I0725 10:48:39.678630    3774 start.go:293] postStartSetup for "ha-485000" (driver="hyperkit")
	I0725 10:48:39.678637    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:48:39.678650    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.678827    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:48:39.678844    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.678949    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.679038    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.679143    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.679235    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.716368    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:48:39.720567    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:48:39.720581    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:48:39.720675    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:48:39.720817    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:48:39.720823    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:48:39.720982    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:48:39.729186    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:48:39.758308    3774 start.go:296] duration metric: took 79.656539ms for postStartSetup
	I0725 10:48:39.758333    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.758515    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:48:39.758527    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.758630    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.758718    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.758821    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.758909    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.790766    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:48:39.790818    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:48:39.841417    3774 fix.go:56] duration metric: took 13.423999639s for fixHost
	I0725 10:48:39.841437    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.841579    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.841669    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.841753    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.841830    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.841969    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:39.842111    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:39.842118    3774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 10:48:39.893711    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929719.869493557
	
	I0725 10:48:39.893723    3774 fix.go:216] guest clock: 1721929719.869493557
	I0725 10:48:39.893729    3774 fix.go:229] Guest: 2024-07-25 10:48:39.869493557 -0700 PDT Remote: 2024-07-25 10:48:39.841427 -0700 PDT m=+13.869378775 (delta=28.066557ms)
	I0725 10:48:39.893749    3774 fix.go:200] guest clock delta is within tolerance: 28.066557ms
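
The fix step above compares the guest's `date +%s.%N` output against the host clock and skips any resync when the delta is small. A minimal sketch of that comparison, using the timestamps from the log lines above and an assumed one-second tolerance (the real threshold is not shown in this log):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far the
	// guest clock is from the given host timestamp.
	func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		const tolerance = time.Second // assumed threshold for this sketch
		// Guest and host timestamps taken from the log lines above.
		d, err := clockDelta("1721929719.869493557\n", time.Unix(1721929719, 841427000))
		if err != nil {
			panic(err)
		}
		if d < 0 {
			d = -d
		}
		if d <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", d)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; the guest clock would be resynced\n", d)
		}
	}
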
	I0725 10:48:39.893753    3774 start.go:83] releasing machines lock for "ha-485000", held for 13.476381445s
	I0725 10:48:39.893772    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.893900    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:39.894007    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.894332    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.894445    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.894524    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:48:39.894561    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.894577    3774 ssh_runner.go:195] Run: cat /version.json
	I0725 10:48:39.894588    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.894652    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.894680    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.894746    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.894757    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.894831    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.894854    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.894930    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.894951    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.969105    3774 ssh_runner.go:195] Run: systemctl --version
	I0725 10:48:39.974344    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 10:48:39.978550    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:48:39.978588    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:48:39.992374    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 10:48:39.992386    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:39.992494    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:40.010041    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:48:40.018981    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:48:40.027827    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:48:40.027880    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:48:40.036849    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:40.045802    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:48:40.054565    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:40.063403    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:48:40.072492    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:48:40.081289    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:48:40.089964    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:48:40.098883    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:48:40.106915    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:48:40.114912    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:40.213735    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0725 10:48:40.232850    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:40.232927    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:48:40.247072    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:40.260328    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:48:40.277505    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:40.288634    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:40.302282    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:48:40.323941    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:40.334346    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:40.349841    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:48:40.352851    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:48:40.359956    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:48:40.373249    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:48:40.468940    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:48:40.562165    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:48:40.562232    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 10:48:40.576420    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:40.666619    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:48:42.973495    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.306818339s)
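
The step above writes a small /etc/docker/daemon.json (130 bytes) to pin Docker's cgroup driver to cgroupfs, then restarts the daemon. The log does not show the file's contents, so the sketch below is only a plausible reconstruction using Docker's documented exec-opts setting; treat the exact fields as an assumption.

	package main

	import (
		"encoding/json"
		"os"
	)

	func main() {
		// Assumed contents: pinning the cgroup driver via Docker's exec-opts.
		cfg := map[string][]string{
			"exec-opts": {"native.cgroupdriver=cgroupfs"},
		}
		b, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		// Placeholder path; on the VM the target is /etc/docker/daemon.json.
		if err := os.WriteFile("daemon.json", append(b, '\n'), 0644); err != nil {
			panic(err)
		}
	}
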
	I0725 10:48:42.973567    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 10:48:42.984136    3774 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0725 10:48:42.997023    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:48:43.007459    3774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 10:48:43.101460    3774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 10:48:43.206558    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:43.318643    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 10:48:43.332235    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:48:43.343300    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:43.439386    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 10:48:43.504079    3774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 10:48:43.504167    3774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 10:48:43.509100    3774 start.go:563] Will wait 60s for crictl version
	I0725 10:48:43.509160    3774 ssh_runner.go:195] Run: which crictl
	I0725 10:48:43.514298    3774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 10:48:43.540285    3774 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0725 10:48:43.540359    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:48:43.556856    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:48:43.619142    3774 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0725 10:48:43.619193    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:43.619596    3774 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0725 10:48:43.624261    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
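
The bash one-liner above makes the host.minikube.internal mapping idempotent: it filters out any existing line for that name and appends a fresh "IP<TAB>name" entry before copying the file back. Below is a hedged Go equivalent of that filter-and-append step, writing to a placeholder path instead of /etc/hosts.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry reproduces the shell one-liner above: drop any existing
	// line for the name, append the fresh "IP<TAB>name" mapping, and write the
	// result back.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // old entry for this name; replaced below
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Placeholder file name; on the VM this targets /etc/hosts.
		if err := ensureHostsEntry("hosts.test", "192.169.0.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}
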
	I0725 10:48:43.634147    3774 kubeadm.go:883] updating cluster {Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false f
reshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 10:48:43.634230    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:48:43.634284    3774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 10:48:43.648086    3774 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0725 10:48:43.648098    3774 docker.go:615] Images already preloaded, skipping extraction
	I0725 10:48:43.648178    3774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 10:48:43.661887    3774 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0725 10:48:43.661905    3774 cache_images.go:84] Images are preloaded, skipping loading
	I0725 10:48:43.661914    3774 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.3 docker true true} ...
	I0725 10:48:43.661994    3774 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-485000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 10:48:43.662065    3774 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 10:48:43.699921    3774 cni.go:84] Creating CNI manager for ""
	I0725 10:48:43.699936    3774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 10:48:43.699949    3774 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 10:48:43.699966    3774 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-485000 NodeName:ha-485000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 10:48:43.700056    3774 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-485000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 10:48:43.700077    3774 kube-vip.go:115] generating kube-vip config ...
	I0725 10:48:43.700127    3774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 10:48:43.712809    3774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 10:48:43.712873    3774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
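
Just before this config was rendered, the runner loaded the IPVS modules (ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack) and then logged that it was auto-enabling control-plane load balancing, which is why lb_enable is set above. The sketch below is only an inferred, hedged reading of that decision; the actual minikube logic is not shown in this log.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// If all IPVS-related modules load, enable kube-vip's load balancing;
		// the module list mirrors the modprobe command in the log above.
		err := exec.Command("sudo", "modprobe", "--all",
			"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack").Run()
		lbEnable := err == nil
		fmt.Println("kube-vip lb_enable:", lbEnable)
	}
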
	I0725 10:48:43.712925    3774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 10:48:43.721182    3774 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 10:48:43.721226    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0725 10:48:43.728575    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0725 10:48:43.742374    3774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 10:48:43.755567    3774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0725 10:48:43.769800    3774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0725 10:48:43.783433    3774 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0725 10:48:43.786504    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:48:43.795954    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:43.896290    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:48:43.910403    3774 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000 for IP: 192.169.0.5
	I0725 10:48:43.910418    3774 certs.go:194] generating shared ca certs ...
	I0725 10:48:43.910428    3774 certs.go:226] acquiring lock for ca certs: {Name:mk9c2ae9585d6ecb7080ac84ff2a6c71f713722e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:43.910590    3774 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key
	I0725 10:48:43.910647    3774 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key
	I0725 10:48:43.910658    3774 certs.go:256] generating profile certs ...
	I0725 10:48:43.910746    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key
	I0725 10:48:43.910769    3774 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f
	I0725 10:48:43.910786    3774 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0725 10:48:44.010960    3774 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f ...
	I0725 10:48:44.010977    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f: {Name:mka1c7bb5889cefec4fa34bda59b0dccc014b849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.011374    3774 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f ...
	I0725 10:48:44.011384    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f: {Name:mk2a7443f9ec44bdbab1eccd742bb8d7bd46104e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.011591    3774 certs.go:381] copying /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f -> /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt
	I0725 10:48:44.011796    3774 certs.go:385] copying /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f -> /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key
	I0725 10:48:44.012023    3774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key
	I0725 10:48:44.012033    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 10:48:44.012056    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 10:48:44.012075    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 10:48:44.012095    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 10:48:44.012113    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 10:48:44.012132    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 10:48:44.012152    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 10:48:44.012170    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 10:48:44.012249    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem (1338 bytes)
	W0725 10:48:44.012300    3774 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732_empty.pem, impossibly tiny 0 bytes
	I0725 10:48:44.012308    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 10:48:44.012345    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem (1078 bytes)
	I0725 10:48:44.012379    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem (1123 bytes)
	I0725 10:48:44.012417    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem (1679 bytes)
	I0725 10:48:44.012485    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:48:44.012517    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.012537    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem -> /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.012555    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.013001    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 10:48:44.040701    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 10:48:44.077388    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 10:48:44.112787    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 10:48:44.159876    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 10:48:44.190098    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 10:48:44.210542    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 10:48:44.230450    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 10:48:44.251339    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 10:48:44.271102    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem --> /usr/share/ca-certificates/1732.pem (1338 bytes)
	I0725 10:48:44.290804    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /usr/share/ca-certificates/17322.pem (1708 bytes)
	I0725 10:48:44.310754    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 10:48:44.324090    3774 ssh_runner.go:195] Run: openssl version
	I0725 10:48:44.328453    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 10:48:44.336951    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.340313    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.340348    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.344548    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 10:48:44.352831    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1732.pem && ln -fs /usr/share/ca-certificates/1732.pem /etc/ssl/certs/1732.pem"
	I0725 10:48:44.360980    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.364473    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:38 /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.364507    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.368814    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1732.pem /etc/ssl/certs/51391683.0"
	I0725 10:48:44.377043    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17322.pem && ln -fs /usr/share/ca-certificates/17322.pem /etc/ssl/certs/17322.pem"
	I0725 10:48:44.385344    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.388809    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:38 /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.388844    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.393238    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17322.pem /etc/ssl/certs/3ec20f2e.0"
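
The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) come from `openssl x509 -hash`, which prints the certificate's subject hash; linking <hash>.0 into /etc/ssl/certs is what lets OpenSSL-based clients find the CA. A hedged sketch of that pair of steps, shelling out to openssl the same way the commands above do (it assumes the openssl binary is present and the target directory is writable; paths are placeholders):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkByHash asks openssl for the certificate's subject hash and creates the
	// <hash>.0 symlink the way the ln -fs commands above do.
	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		link := fmt.Sprintf("%s/%s.0", certsDir, hash)
		_ = os.Remove(link) // mimic the force flag of ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		// Placeholder paths; on the VM these are /usr/share/ca-certificates and /etc/ssl/certs.
		if err := linkByHash("minikubeCA.pem", "."); err != nil {
			panic(err)
		}
	}
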
	I0725 10:48:44.401504    3774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 10:48:44.404983    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 10:48:44.409808    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 10:48:44.414092    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 10:48:44.418841    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 10:48:44.423109    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 10:48:44.427402    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
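
The `-checkend 86400` calls above ask openssl whether each certificate expires within the next 24 hours. The same check can be expressed directly against the parsed certificate; the sketch below is a hedged Go equivalent with an illustrative file name, not minikube's code.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// matching what `openssl x509 -checkend` checks.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Illustrative file name; the log checks certs under /var/lib/minikube/certs.
		soon, err := expiresWithin("front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
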
	I0725 10:48:44.432185    3774 kubeadm.go:392] StartCluster: {Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:48:44.432302    3774 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 10:48:44.448953    3774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 10:48:44.456669    3774 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 10:48:44.456681    3774 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 10:48:44.456727    3774 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 10:48:44.465243    3774 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 10:48:44.465557    3774 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-485000" does not appear in /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:44.465649    3774 kubeconfig.go:62] /Users/jenkins/minikube-integration/19326-1195/kubeconfig needs updating (will repair): [kubeconfig missing "ha-485000" cluster setting kubeconfig missing "ha-485000" context setting]
	I0725 10:48:44.465837    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/kubeconfig: {Name:mkc008ed365c6765ce04b67847a4585ec214b70b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.466249    3774 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:44.466441    3774 kapi.go:59] client config for ha-485000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe203400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 10:48:44.466804    3774 cert_rotation.go:137] Starting client certificate rotation controller
	I0725 10:48:44.466963    3774 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 10:48:44.474340    3774 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0725 10:48:44.474351    3774 kubeadm.go:597] duration metric: took 17.665834ms to restartPrimaryControlPlane
	I0725 10:48:44.474370    3774 kubeadm.go:394] duration metric: took 42.188275ms to StartCluster
	I0725 10:48:44.474382    3774 settings.go:142] acquiring lock: {Name:mk4f7e43bf5353228d4c27f1f08450065f65cd00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.474454    3774 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:44.474852    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/kubeconfig: {Name:mkc008ed365c6765ce04b67847a4585ec214b70b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.475072    3774 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 10:48:44.475085    3774 start.go:241] waiting for startup goroutines ...
	I0725 10:48:44.475106    3774 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 10:48:44.475233    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:44.518490    3774 out.go:177] * Enabled addons: 
	I0725 10:48:44.539102    3774 addons.go:510] duration metric: took 64.005371ms for enable addons: enabled=[]
	I0725 10:48:44.539140    3774 start.go:246] waiting for cluster config update ...
	I0725 10:48:44.539164    3774 start.go:255] writing updated cluster config ...
	I0725 10:48:44.561436    3774 out.go:177] 
	I0725 10:48:44.582967    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:44.583098    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:44.605394    3774 out.go:177] * Starting "ha-485000-m02" control-plane node in "ha-485000" cluster
	I0725 10:48:44.647510    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:48:44.647544    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:48:44.647720    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:48:44.647738    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:48:44.647870    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:44.648902    3774 start.go:360] acquireMachinesLock for ha-485000-m02: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:48:44.649015    3774 start.go:364] duration metric: took 81.917µs to acquireMachinesLock for "ha-485000-m02"
	I0725 10:48:44.649041    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:48:44.649050    3774 fix.go:54] fixHost starting: m02
	I0725 10:48:44.649495    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:44.649529    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:48:44.659031    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51909
	I0725 10:48:44.659557    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:48:44.659989    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:48:44.660004    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:48:44.660364    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:48:44.660504    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:44.660702    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetState
	I0725 10:48:44.660973    3774 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:44.661074    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid from json: 3731
	I0725 10:48:44.661956    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid 3731 missing from process table
	I0725 10:48:44.662000    3774 fix.go:112] recreateIfNeeded on ha-485000-m02: state=Stopped err=<nil>
	I0725 10:48:44.662009    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	W0725 10:48:44.662092    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:48:44.704559    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000-m02" ...
	I0725 10:48:44.726283    3774 main.go:141] libmachine: (ha-485000-m02) Calling .Start
	I0725 10:48:44.726558    3774 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:44.726661    3774 main.go:141] libmachine: (ha-485000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid
	I0725 10:48:44.728373    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid 3731 missing from process table
	I0725 10:48:44.728390    3774 main.go:141] libmachine: (ha-485000-m02) DBG | pid 3731 is in state "Stopped"
	I0725 10:48:44.728407    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid...
	I0725 10:48:44.728847    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Using UUID 528f0647-a045-4ab7-922b-886237fb4fc4
	I0725 10:48:44.756033    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Generated MAC c2:64:80:a8:d2:48
	I0725 10:48:44.756067    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:48:44.756191    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"528f0647-a045-4ab7-922b-886237fb4fc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003968d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:44.756227    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"528f0647-a045-4ab7-922b-886237fb4fc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003968d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:44.756275    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "528f0647-a045-4ab7-922b-886237fb4fc4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/ha-485000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:48:44.756319    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 528f0647-a045-4ab7-922b-886237fb4fc4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/ha-485000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:48:44.756334    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:48:44.757674    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Pid is 3792
	I0725 10:48:44.758132    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Attempt 0
	I0725 10:48:44.758146    3774 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:44.758210    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid from json: 3792
	I0725 10:48:44.759852    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Searching for c2:64:80:a8:d2:48 in /var/db/dhcpd_leases ...
	I0725 10:48:44.759913    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:48:44.759930    3774 main.go:141] libmachine: (ha-485000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e173}
	I0725 10:48:44.759945    3774 main.go:141] libmachine: (ha-485000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:48:44.759953    3774 main.go:141] libmachine: (ha-485000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e130}
	I0725 10:48:44.759960    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Found match: c2:64:80:a8:d2:48
	I0725 10:48:44.759970    3774 main.go:141] libmachine: (ha-485000-m02) DBG | IP: 192.169.0.6
	I0725 10:48:44.759997    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetConfigRaw
	I0725 10:48:44.760701    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:48:44.760893    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:44.761371    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:48:44.761383    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:44.761484    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:44.761567    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:44.761671    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:44.761791    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:44.761906    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:44.762039    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:44.762188    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:44.762196    3774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 10:48:44.765251    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:48:44.773188    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:48:44.774148    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:44.774173    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:44.774196    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:44.774224    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:45.156825    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:48:45.156838    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:48:45.271856    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:45.271872    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:45.271881    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:45.271892    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:45.272766    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:48:45.272776    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:48:50.885003    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:48:50.885077    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:48:50.885089    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:48:50.908756    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:48:55.821181    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:48:55.821195    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetMachineName
	I0725 10:48:55.821334    3774 buildroot.go:166] provisioning hostname "ha-485000-m02"
	I0725 10:48:55.821345    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetMachineName
	I0725 10:48:55.821436    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:55.821525    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:55.821602    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.821685    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.821770    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:55.821916    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:55.822063    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:55.822074    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000-m02 && echo "ha-485000-m02" | sudo tee /etc/hostname
	I0725 10:48:55.882249    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000-m02
	
	I0725 10:48:55.882268    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:55.882410    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:55.882498    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.882588    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.882688    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:55.882825    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:55.883013    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:55.883027    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:48:55.939117    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 10:48:55.939132    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:48:55.939142    3774 buildroot.go:174] setting up certificates
	I0725 10:48:55.939148    3774 provision.go:84] configureAuth start
	I0725 10:48:55.939154    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetMachineName
	I0725 10:48:55.939283    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:48:55.939381    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:55.939461    3774 provision.go:143] copyHostCerts
	I0725 10:48:55.939491    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:55.939543    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:48:55.939549    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:55.939688    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:48:55.939893    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:55.939923    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:48:55.939928    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:55.940045    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:48:55.940199    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:55.940230    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:48:55.940235    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:55.940305    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:48:55.940447    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000-m02 san=[127.0.0.1 192.169.0.6 ha-485000-m02 localhost minikube]
	I0725 10:48:56.088970    3774 provision.go:177] copyRemoteCerts
	I0725 10:48:56.089020    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:48:56.089034    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.089186    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.089282    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.089402    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.089501    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:56.122259    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:48:56.122325    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:48:56.141398    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:48:56.141472    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 10:48:56.160336    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:48:56.160401    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 10:48:56.179351    3774 provision.go:87] duration metric: took 240.193399ms to configureAuth
	I0725 10:48:56.179364    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:48:56.179528    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:56.179541    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:56.179672    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.179753    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.179827    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.179907    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.179983    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.180095    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:56.180218    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:56.180226    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:48:56.231701    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:48:56.231712    3774 buildroot.go:70] root file system type: tmpfs
	I0725 10:48:56.231785    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:48:56.231798    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.231926    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.232020    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.232113    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.232213    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.232352    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:56.232487    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:56.232547    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:48:56.292824    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 10:48:56.292843    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.292983    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.293079    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.293173    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.293276    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.293398    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:56.293536    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:56.293548    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:48:57.936294    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 10:48:57.936308    3774 machine.go:97] duration metric: took 13.174752883s to provisionDockerMachine
	I0725 10:48:57.936315    3774 start.go:293] postStartSetup for "ha-485000-m02" (driver="hyperkit")
	I0725 10:48:57.936322    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:48:57.936333    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:57.936508    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:48:57.936520    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:57.936625    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:57.936725    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:57.936811    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:57.936919    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:57.973264    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:48:57.978182    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:48:57.978195    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:48:57.978293    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:48:57.978433    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:48:57.978439    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:48:57.978595    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:48:57.987699    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:48:58.019591    3774 start.go:296] duration metric: took 83.266386ms for postStartSetup
	I0725 10:48:58.019613    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.019795    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:48:58.019808    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:58.019904    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.019990    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.020087    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.020182    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:58.051727    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:48:58.051783    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:48:58.105518    3774 fix.go:56] duration metric: took 13.45628652s for fixHost
	I0725 10:48:58.105546    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:58.105686    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.105772    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.105857    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.105932    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.106046    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:58.106195    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:58.106205    3774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 10:48:58.159243    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929738.065939069
	
	I0725 10:48:58.159255    3774 fix.go:216] guest clock: 1721929738.065939069
	I0725 10:48:58.159260    3774 fix.go:229] Guest: 2024-07-25 10:48:58.065939069 -0700 PDT Remote: 2024-07-25 10:48:58.105535 -0700 PDT m=+32.133243684 (delta=-39.595931ms)
	I0725 10:48:58.159284    3774 fix.go:200] guest clock delta is within tolerance: -39.595931ms
	I0725 10:48:58.159289    3774 start.go:83] releasing machines lock for "ha-485000-m02", held for 13.510083839s
	I0725 10:48:58.159306    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.159443    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:48:58.185128    3774 out.go:177] * Found network options:
	I0725 10:48:58.204774    3774 out.go:177]   - NO_PROXY=192.169.0.5
	W0725 10:48:58.225878    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:48:58.225912    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.226598    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.226812    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.226934    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:48:58.226975    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	W0725 10:48:58.227058    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:48:58.227182    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 10:48:58.227214    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:58.227277    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.227471    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.227493    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.227612    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.227649    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.227722    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:58.227752    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.227862    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	W0725 10:48:58.255968    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:48:58.256032    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:48:58.313620    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 10:48:58.313643    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:58.313758    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:58.330047    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:48:58.338245    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:48:58.346310    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:48:58.346349    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:48:58.354315    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:58.362619    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:48:58.370851    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:58.379085    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:48:58.387426    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:48:58.395620    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:48:58.403886    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:48:58.412116    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:48:58.419752    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:48:58.427324    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:58.523289    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0725 10:48:58.542645    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:58.542713    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:48:58.555600    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:58.573132    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:48:58.586107    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:58.596266    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:58.606623    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:48:58.626833    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:58.637094    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:58.651924    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:48:58.654935    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:48:58.662286    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:48:58.675716    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:48:58.765779    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:48:58.866546    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:48:58.866576    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 10:48:58.880570    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:58.988028    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:49:01.326397    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.338319421s)
	I0725 10:49:01.326462    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 10:49:01.336948    3774 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0725 10:49:01.349778    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:49:01.360626    3774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 10:49:01.455101    3774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 10:49:01.569356    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:01.667972    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 10:49:01.681113    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:49:01.691490    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:01.801249    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 10:49:01.864595    3774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 10:49:01.864666    3774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 10:49:01.869013    3774 start.go:563] Will wait 60s for crictl version
	I0725 10:49:01.869064    3774 ssh_runner.go:195] Run: which crictl
	I0725 10:49:01.872470    3774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 10:49:01.897402    3774 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0725 10:49:01.897474    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:49:01.915840    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:49:01.955682    3774 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0725 10:49:01.997327    3774 out.go:177]   - env NO_PROXY=192.169.0.5
	I0725 10:49:02.018327    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:49:02.018733    3774 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0725 10:49:02.023070    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:49:02.032355    3774 mustload.go:65] Loading cluster: ha-485000
	I0725 10:49:02.032524    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:02.032743    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:49:02.032766    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:49:02.041277    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51931
	I0725 10:49:02.041655    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:49:02.041981    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:49:02.041992    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:49:02.042211    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:49:02.042328    3774 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:49:02.042405    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:02.042478    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3787
	I0725 10:49:02.043429    3774 host.go:66] Checking if "ha-485000" exists ...
	I0725 10:49:02.043673    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:49:02.043701    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:49:02.052045    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51933
	I0725 10:49:02.052388    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:49:02.052739    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:49:02.052755    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:49:02.052955    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:49:02.053107    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:49:02.053221    3774 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000 for IP: 192.169.0.6
	I0725 10:49:02.053230    3774 certs.go:194] generating shared ca certs ...
	I0725 10:49:02.053241    3774 certs.go:226] acquiring lock for ca certs: {Name:mk9c2ae9585d6ecb7080ac84ff2a6c71f713722e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:49:02.053422    3774 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key
	I0725 10:49:02.053492    3774 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key
	I0725 10:49:02.053502    3774 certs.go:256] generating profile certs ...
	I0725 10:49:02.053609    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key
	I0725 10:49:02.053685    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.71a9457c
	I0725 10:49:02.053735    3774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key
	I0725 10:49:02.053742    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 10:49:02.053762    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 10:49:02.053782    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 10:49:02.053800    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 10:49:02.053818    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 10:49:02.053836    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 10:49:02.053855    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 10:49:02.053873    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 10:49:02.053951    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem (1338 bytes)
	W0725 10:49:02.054004    3774 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732_empty.pem, impossibly tiny 0 bytes
	I0725 10:49:02.054013    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 10:49:02.054048    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem (1078 bytes)
	I0725 10:49:02.054088    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem (1123 bytes)
	I0725 10:49:02.054118    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem (1679 bytes)
	I0725 10:49:02.054190    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:49:02.054224    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.054248    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem -> /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.054268    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.054296    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:49:02.054399    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:49:02.054491    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:49:02.054572    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:49:02.054658    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
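
The sshutil/ssh_runner lines above show minikube opening an SSH session to the control-plane VM and probing file sizes with `stat -c %s` before copying certificate material. A minimal, illustrative sketch of that remote probe using golang.org/x/crypto/ssh follows; the key path, user and address are placeholders for this example, not values taken from the log, and this is not the code minikube itself runs.

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"

        "golang.org/x/crypto/ssh"
    )

    // remoteFileSize runs `stat -c %s <path>` over SSH and returns the size in bytes as text.
    func remoteFileSize(client *ssh.Client, path string) (string, error) {
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.Output("stat -c %s " + path)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        size, err := remoteFileSize(client, "/var/lib/minikube/certs/sa.pub")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("sa.pub size:", size, "bytes")
    }
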
	I0725 10:49:02.079258    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0725 10:49:02.082822    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0725 10:49:02.090699    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0725 10:49:02.093745    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0725 10:49:02.101464    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0725 10:49:02.104537    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0725 10:49:02.112169    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0725 10:49:02.115278    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0725 10:49:02.123703    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0725 10:49:02.126716    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0725 10:49:02.134446    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0725 10:49:02.137824    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0725 10:49:02.146212    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 10:49:02.166591    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 10:49:02.186453    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 10:49:02.205945    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 10:49:02.225778    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 10:49:02.245674    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 10:49:02.266075    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 10:49:02.286003    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 10:49:02.305311    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 10:49:02.325216    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem --> /usr/share/ca-certificates/1732.pem (1338 bytes)
	I0725 10:49:02.345019    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /usr/share/ca-certificates/17322.pem (1708 bytes)
	I0725 10:49:02.365056    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0725 10:49:02.378609    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0725 10:49:02.392247    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0725 10:49:02.405745    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0725 10:49:02.419356    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0725 10:49:02.432750    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0725 10:49:02.446244    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0725 10:49:02.459911    3774 ssh_runner.go:195] Run: openssl version
	I0725 10:49:02.464066    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 10:49:02.472406    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.475732    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.475780    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.479985    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 10:49:02.488332    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1732.pem && ln -fs /usr/share/ca-certificates/1732.pem /etc/ssl/certs/1732.pem"
	I0725 10:49:02.496582    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.499979    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:38 /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.500026    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.504179    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1732.pem /etc/ssl/certs/51391683.0"
	I0725 10:49:02.513038    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17322.pem && ln -fs /usr/share/ca-certificates/17322.pem /etc/ssl/certs/17322.pem"
	I0725 10:49:02.521433    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.524771    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:38 /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.524804    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.528889    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17322.pem /etc/ssl/certs/3ec20f2e.0"
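
The three `openssl x509 -hash` / `ln -fs` pairs above are the standard way to make extra CAs visible to OpenSSL: the symlink in /etc/ssl/certs must be named after the certificate's subject hash plus ".0". A hedged Go sketch of those same two steps, shelling out to openssl for the hash rather than reimplementing the subject-hash algorithm (paths are examples from the log, the helper name is invented):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCAByHash mirrors `openssl x509 -hash -noout -in cert` followed by
    // `ln -fs cert /etc/ssl/certs/<hash>.0`, which is how OpenSSL locates CAs by subject hash.
    func linkCAByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // -f semantics: replace an existing link if present
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("CA symlink created")
    }
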
	I0725 10:49:02.537109    3774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 10:49:02.540476    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 10:49:02.544803    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 10:49:02.548989    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 10:49:02.553131    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 10:49:02.557276    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 10:49:02.561375    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
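
Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 24 hours. The same check can be expressed directly with crypto/x509; this is an illustrative equivalent under that assumption, not the code the test harness runs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within the given window, i.e. openssl's `-checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
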
	I0725 10:49:02.565522    3774 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.3 docker true true} ...
	I0725 10:49:02.565585    3774 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-485000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
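
The kubelet unit dumped above is plain text assembled from the node's name, IP and Kubernetes version. A minimal text/template sketch that renders an equivalent drop-in is shown below; the struct and field names are invented for the example and the template is a simplified stand-in, not minikube's actual template.

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // kubeletUnit is a simplified stand-in for the systemd drop-in shown in the log.
    const kubeletUnit = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    type kubeletParams struct {
        KubernetesVersion string
        NodeName          string
        NodeIP            string
    }

    func main() {
        tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
        p := kubeletParams{KubernetesVersion: "v1.30.3", NodeName: "ha-485000-m02", NodeIP: "192.169.0.6"}
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            log.Fatal(err)
        }
    }
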
	I0725 10:49:02.565604    3774 kube-vip.go:115] generating kube-vip config ...
	I0725 10:49:02.565637    3774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 10:49:02.578066    3774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 10:49:02.578105    3774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
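
The kube-vip static pod above is configured entirely through container env vars: the VIP in `address`, leader election via `vip_leaderelection`, and control-plane load balancing via `lb_enable`/`lb_port`. A small sanity-check sketch that parses such a manifest with sigs.k8s.io/yaml and reads back the advertised VIP; the manifest path is a placeholder and this is only one way to inspect the file, not part of the test flow.

    package main

    import (
        "fmt"
        "log"
        "os"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            log.Fatal(err)
        }
        var pod corev1.Pod
        if err := yaml.Unmarshal(data, &pod); err != nil {
            log.Fatal(err)
        }
        for _, c := range pod.Spec.Containers {
            for _, e := range c.Env {
                if e.Name == "address" {
                    fmt.Printf("%s will advertise VIP %s\n", c.Image, e.Value)
                }
            }
        }
    }
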
	I0725 10:49:02.578150    3774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 10:49:02.585892    3774 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 10:49:02.585944    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0725 10:49:02.593082    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0725 10:49:02.606549    3774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 10:49:02.620275    3774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0725 10:49:02.633644    3774 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0725 10:49:02.636442    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
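
The bash one-liner above rewrites /etc/hosts so that exactly one line maps control-plane.minikube.internal to the VIP: it drops any stale entry and appends the fresh one. A sketch of the same idempotent update in Go, writing to a temporary file first just as the shell version does (the helper name is invented for the example):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry removes any existing line for host and appends "ip\thost".
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        var kept []string
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the old entry for this host
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
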
	I0725 10:49:02.645688    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:02.737901    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:49:02.753131    3774 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 10:49:02.753310    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:02.774683    3774 out.go:177] * Verifying Kubernetes components...
	I0725 10:49:02.795324    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:02.913425    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:49:02.928242    3774 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:49:02.928448    3774 kapi.go:59] client config for ha-485000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe203400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0725 10:49:02.928483    3774 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0725 10:49:02.928641    3774 node_ready.go:35] waiting up to 6m0s for node "ha-485000-m02" to be "Ready" ...
	I0725 10:49:02.928720    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:02.928725    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:02.928733    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:02.928736    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.586324    3774 round_trippers.go:574] Response Status: 200 OK in 9657 milliseconds
	I0725 10:49:12.588091    3774 node_ready.go:49] node "ha-485000-m02" has status "Ready":"True"
	I0725 10:49:12.588104    3774 node_ready.go:38] duration metric: took 9.659318554s for node "ha-485000-m02" to be "Ready" ...
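
The node_ready wait logged above is a plain poll of GET /api/v1/nodes/<name> until the NodeReady condition reports True. An illustrative client-go version of the same loop is below; the kubeconfig path is a placeholder and the 2s/6m interval and timeout are example values chosen to match the "waiting up to 6m0s" message, not extracted from minikube's source.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady checks the NodeReady condition, the same signal the log calls "Ready":"True".
    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                n, err := cs.CoreV1().Nodes().Get(ctx, "ha-485000-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling on transient errors
                }
                return nodeIsReady(n), nil
            })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("node is Ready")
    }
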
	I0725 10:49:12.588113    3774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 10:49:12.588160    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:12.588167    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.588173    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.588177    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.640690    3774 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0725 10:49:12.646847    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.646903    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:49:12.646909    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.646915    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.646917    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.650764    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:12.651266    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:12.651274    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.651280    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.651283    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.655046    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:12.655360    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.655369    3774 pod_ready.go:81] duration metric: took 8.506318ms for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.655377    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.655414    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pnmm6
	I0725 10:49:12.655418    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.655424    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.655428    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.657266    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.657799    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:12.657806    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.657811    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.657815    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.659256    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.659713    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.659721    3774 pod_ready.go:81] duration metric: took 4.339404ms for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.659728    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.659764    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000
	I0725 10:49:12.659769    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.659775    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.659779    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.661249    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.661658    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:12.661665    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.661671    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.661674    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.662972    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.663349    3774 pod_ready.go:92] pod "etcd-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.663356    3774 pod_ready.go:81] duration metric: took 3.624252ms for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.663362    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.663396    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m02
	I0725 10:49:12.663402    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.663407    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.663412    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.665923    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:12.666288    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:12.666295    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.666300    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.666303    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.667727    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.668096    3774 pod_ready.go:92] pod "etcd-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.668110    3774 pod_ready.go:81] duration metric: took 4.73801ms for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.668116    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.668146    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:49:12.668151    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.668156    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.668160    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.669612    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.789876    3774 request.go:629] Waited for 119.652546ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:12.789922    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:12.789939    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.789951    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.789958    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.792981    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:12.793487    3774 pod_ready.go:92] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.793499    3774 pod_ready.go:81] duration metric: took 125.375312ms for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
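
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines are the client-go rate limiter at work: with the default client-side limits the burst of node and pod GETs above is spaced out before it ever reaches the API server. A short sketch of where those limits live on rest.Config; the kubeconfig path and the raised values are arbitrary examples, and raising them is only appropriate when the extra API load is acceptable.

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        // The default client-side limits (QPS 5, Burst 10) are what produce the
        // "Waited for ... due to client-side throttling" messages in the log.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }
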
	I0725 10:49:12.793518    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.989031    3774 request.go:629] Waited for 195.453141ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000
	I0725 10:49:12.989083    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000
	I0725 10:49:12.989166    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.989181    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.989188    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.991803    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.188209    3774 request.go:629] Waited for 195.602163ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:13.188275    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:13.188289    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.188295    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.188299    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.190503    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.190953    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:13.190962    3774 pod_ready.go:81] duration metric: took 397.432093ms for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.190969    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.390284    3774 request.go:629] Waited for 199.267222ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m02
	I0725 10:49:13.390387    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m02
	I0725 10:49:13.390398    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.390409    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.390414    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.393500    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:13.589029    3774 request.go:629] Waited for 194.741072ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:13.589060    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:13.589065    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.589074    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.589108    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.591724    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.592136    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:13.592146    3774 pod_ready.go:81] duration metric: took 401.165409ms for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.592153    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.789047    3774 request.go:629] Waited for 196.700248ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m03
	I0725 10:49:13.789110    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m03
	I0725 10:49:13.789120    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.789143    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.789150    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.792302    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:13.988478    3774 request.go:629] Waited for 195.547657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:13.988571    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:13.988579    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.988590    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.988601    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.991591    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.992155    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:13.992168    3774 pod_ready.go:81] duration metric: took 400.004283ms for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.992177    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.188856    3774 request.go:629] Waited for 196.606719ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000
	I0725 10:49:14.189024    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000
	I0725 10:49:14.189035    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.189046    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.189056    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.192692    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:14.388336    3774 request.go:629] Waited for 195.165082ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:14.388469    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:14.388481    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.388492    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.388500    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.392008    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:14.392470    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:14.392479    3774 pod_ready.go:81] duration metric: took 400.290042ms for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.392486    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.589221    3774 request.go:629] Waited for 196.680325ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:49:14.589260    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:49:14.589265    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.589271    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.589276    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.591675    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:14.788243    3774 request.go:629] Waited for 196.189639ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:14.788314    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:14.788319    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.788325    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.788338    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.790264    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:14.790682    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:14.790691    3774 pod_ready.go:81] duration metric: took 398.194597ms for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.790698    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.989573    3774 request.go:629] Waited for 198.821418ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:49:14.989634    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:49:14.989642    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.989650    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.989656    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.992325    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:15.189580    3774 request.go:629] Waited for 196.795704ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.189705    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.189717    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.189728    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.189736    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.193091    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:15.193587    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:15.193598    3774 pod_ready.go:81] duration metric: took 402.889494ms for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.193607    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.388771    3774 request.go:629] Waited for 195.112594ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:49:15.388925    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:49:15.388937    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.388951    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.388957    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.391994    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:15.589664    3774 request.go:629] Waited for 197.220516ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.589694    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.589699    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.589737    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.589742    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.683154    3774 round_trippers.go:574] Response Status: 200 OK in 93 milliseconds
	I0725 10:49:15.683671    3774 pod_ready.go:92] pod "kube-proxy-65n48" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:15.683684    3774 pod_ready.go:81] duration metric: took 490.064641ms for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.683693    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.789750    3774 request.go:629] Waited for 106.017908ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:49:15.789823    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:49:15.789840    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.789847    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.789850    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.791774    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:15.988478    3774 request.go:629] Waited for 196.346928ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:15.988570    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:15.988579    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.988587    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.988593    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.990813    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:15.991161    3774 pod_ready.go:92] pod "kube-proxy-9w8bj" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:15.991171    3774 pod_ready.go:81] duration metric: took 307.468405ms for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.991178    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.188563    3774 request.go:629] Waited for 197.337332ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:49:16.188614    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:49:16.188625    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.188638    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.188644    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.191974    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:16.388347    3774 request.go:629] Waited for 195.920607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:16.388377    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:16.388382    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.388388    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.388392    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.390392    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:16.390667    3774 pod_ready.go:92] pod "kube-proxy-dc5jq" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:16.390675    3774 pod_ready.go:81] duration metric: took 399.488144ms for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.390682    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.588555    3774 request.go:629] Waited for 197.808091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:49:16.588661    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:49:16.588673    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.588684    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.588693    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.591719    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:16.789759    3774 request.go:629] Waited for 197.31897ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:49:16.789876    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:49:16.789887    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.789898    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.789919    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.793070    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:16.793486    3774 pod_ready.go:92] pod "kube-proxy-mvbkh" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:16.793498    3774 pod_ready.go:81] duration metric: took 402.805905ms for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.793509    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.989321    3774 request.go:629] Waited for 195.762555ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:49:16.989391    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:49:16.989399    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.989406    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.989412    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.991782    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:17.188598    3774 request.go:629] Waited for 196.377768ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:17.188628    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:17.188633    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.188682    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.188688    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.190695    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:17.190988    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:17.190998    3774 pod_ready.go:81] duration metric: took 397.476264ms for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.191012    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.388526    3774 request.go:629] Waited for 197.466794ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:49:17.388597    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:49:17.388604    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.388613    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.388618    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.391737    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:17.588483    3774 request.go:629] Waited for 195.44881ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:17.588569    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:17.588577    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.588586    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.588593    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.591164    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:17.591519    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:17.591528    3774 pod_ready.go:81] duration metric: took 400.505326ms for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.591535    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.789192    3774 request.go:629] Waited for 197.613853ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:49:17.789246    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:49:17.789256    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.789265    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.789271    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.792807    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:17.989780    3774 request.go:629] Waited for 196.433914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:17.989847    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:17.989853    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.989859    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.989862    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.991949    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:17.992236    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:17.992245    3774 pod_ready.go:81] duration metric: took 400.700976ms for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.992253    3774 pod_ready.go:38] duration metric: took 5.404058179s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
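
Each pod_ready cycle above is the pod-level analogue of the earlier node check: fetch the pod and look for a PodReady condition with status True. A minimal helper in the same style, with a tiny constructed Pod standing in for an API response so the sketch is self-contained:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podIsReady mirrors the check behind the pod_ready.go messages in the log:
    // a pod counts as Ready when its PodReady condition reports True.
    func podIsReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }}}
        fmt.Println("ready:", podIsReady(p))
    }
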
	I0725 10:49:17.992267    3774 api_server.go:52] waiting for apiserver process to appear ...
	I0725 10:49:17.992318    3774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 10:49:18.004356    3774 api_server.go:72] duration metric: took 15.250996818s to wait for apiserver process to appear ...
	I0725 10:49:18.004369    3774 api_server.go:88] waiting for apiserver healthz status ...
	I0725 10:49:18.004385    3774 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0725 10:49:18.008895    3774 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0725 10:49:18.008938    3774 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0725 10:49:18.008944    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.008958    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.008961    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.009486    3774 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0725 10:49:18.009545    3774 api_server.go:141] control plane version: v1.30.3
	I0725 10:49:18.009554    3774 api_server.go:131] duration metric: took 5.181232ms to wait for apiserver health ...
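
The healthz probe and the GET /version above map directly onto the client-go discovery client: ServerVersion() performs the same /version call and doubles as a cheap control-plane liveness check. An illustrative sketch, with the kubeconfig path again a placeholder:

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        v, err := cs.Discovery().ServerVersion() // GET /version, as in the log
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
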
	I0725 10:49:18.009561    3774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 10:49:18.188337    3774 request.go:629] Waited for 178.740534ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.188382    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.188390    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.188435    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.188439    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.193593    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:49:18.200076    3774 system_pods.go:59] 26 kube-system pods found
	I0725 10:49:18.200099    3774 system_pods.go:61] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:49:18.200103    3774 system_pods.go:61] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:49:18.200106    3774 system_pods.go:61] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:49:18.200112    3774 system_pods.go:61] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 10:49:18.200115    3774 system_pods.go:61] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:49:18.200119    3774 system_pods.go:61] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:49:18.200121    3774 system_pods.go:61] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:49:18.200123    3774 system_pods.go:61] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:49:18.200127    3774 system_pods.go:61] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0725 10:49:18.200131    3774 system_pods.go:61] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:49:18.200135    3774 system_pods.go:61] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 10:49:18.200139    3774 system_pods.go:61] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:49:18.200142    3774 system_pods.go:61] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:49:18.200147    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 10:49:18.200151    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:49:18.200154    3774 system_pods.go:61] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:49:18.200156    3774 system_pods.go:61] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:49:18.200160    3774 system_pods.go:61] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 10:49:18.200163    3774 system_pods.go:61] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:49:18.200166    3774 system_pods.go:61] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:49:18.200170    3774 system_pods.go:61] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 10:49:18.200173    3774 system_pods.go:61] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:49:18.200178    3774 system_pods.go:61] "kube-vip-ha-485000" [9a116dbb-295f-4e79-8ab5-61e81ab7ddf4] Running
	I0725 10:49:18.200181    3774 system_pods.go:61] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:49:18.200184    3774 system_pods.go:61] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:49:18.200186    3774 system_pods.go:61] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running
	I0725 10:49:18.200193    3774 system_pods.go:74] duration metric: took 190.622361ms to wait for pod list to return data ...
	I0725 10:49:18.200199    3774 default_sa.go:34] waiting for default service account to be created ...
	I0725 10:49:18.388524    3774 request.go:629] Waited for 188.275556ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:49:18.388557    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:49:18.388562    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.388570    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.388573    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.390924    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:18.391069    3774 default_sa.go:45] found service account: "default"
	I0725 10:49:18.391078    3774 default_sa.go:55] duration metric: took 190.872598ms for default service account to be created ...
	I0725 10:49:18.391084    3774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 10:49:18.588425    3774 request.go:629] Waited for 197.306337ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.588458    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.588463    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.588469    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.588474    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.593698    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:49:18.599145    3774 system_pods.go:86] 26 kube-system pods found
	I0725 10:49:18.599162    3774 system_pods.go:89] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:49:18.599167    3774 system_pods.go:89] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:49:18.599170    3774 system_pods.go:89] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:49:18.599175    3774 system_pods.go:89] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 10:49:18.599194    3774 system_pods.go:89] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:49:18.599199    3774 system_pods.go:89] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:49:18.599204    3774 system_pods.go:89] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:49:18.599207    3774 system_pods.go:89] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:49:18.599213    3774 system_pods.go:89] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0725 10:49:18.599216    3774 system_pods.go:89] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:49:18.599223    3774 system_pods.go:89] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 10:49:18.599227    3774 system_pods.go:89] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:49:18.599232    3774 system_pods.go:89] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:49:18.599237    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 10:49:18.599241    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:49:18.599245    3774 system_pods.go:89] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:49:18.599249    3774 system_pods.go:89] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:49:18.599253    3774 system_pods.go:89] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 10:49:18.599272    3774 system_pods.go:89] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:49:18.599280    3774 system_pods.go:89] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:49:18.599286    3774 system_pods.go:89] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 10:49:18.599291    3774 system_pods.go:89] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:49:18.599295    3774 system_pods.go:89] "kube-vip-ha-485000" [9a116dbb-295f-4e79-8ab5-61e81ab7ddf4] Running
	I0725 10:49:18.599298    3774 system_pods.go:89] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:49:18.599301    3774 system_pods.go:89] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:49:18.599304    3774 system_pods.go:89] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running
	I0725 10:49:18.599309    3774 system_pods.go:126] duration metric: took 208.21895ms to wait for k8s-apps to be running ...
	I0725 10:49:18.599322    3774 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 10:49:18.599377    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:49:18.611737    3774 system_svc.go:56] duration metric: took 12.412676ms WaitForService to wait for kubelet
	I0725 10:49:18.611752    3774 kubeadm.go:582] duration metric: took 15.858385641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 10:49:18.611763    3774 node_conditions.go:102] verifying NodePressure condition ...
	I0725 10:49:18.789774    3774 request.go:629] Waited for 177.962916ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0725 10:49:18.789887    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0725 10:49:18.789910    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.789924    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.789930    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.793551    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:18.794424    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794438    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794448    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794452    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794455    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794458    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794462    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794465    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794468    3774 node_conditions.go:105] duration metric: took 182.699541ms to run NodePressure ...
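
The run above polls the API server for kube-system pods (throttled client-side) and then reads each node's capacity while verifying NodePressure. A minimal client-go sketch of the same kind of health check follows; the kubeconfig path and the use of client-go directly (rather than minikube's internal helpers) are assumptions for illustration only.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; the test run points KUBECONFIG at its own minikube-integration home.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Count non-running kube-system pods, mirroring the "k8s-apps to be running" wait.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        notRunning := 0
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                notRunning++
            }
        }
        fmt.Printf("%d kube-system pods, %d not running\n", len(pods.Items), notRunning)

        // Read per-node CPU and ephemeral-storage capacity, as in the NodePressure check.
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }
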
	I0725 10:49:18.794476    3774 start.go:241] waiting for startup goroutines ...
	I0725 10:49:18.794495    3774 start.go:255] writing updated cluster config ...
	I0725 10:49:18.818273    3774 out.go:177] 
	I0725 10:49:18.857228    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:18.857312    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:49:18.879124    3774 out.go:177] * Starting "ha-485000-m03" control-plane node in "ha-485000" cluster
	I0725 10:49:18.920865    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:49:18.920897    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:49:18.921101    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:49:18.921120    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:49:18.921243    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:49:18.922716    3774 start.go:360] acquireMachinesLock for ha-485000-m03: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:49:18.922851    3774 start.go:364] duration metric: took 109.932µs to acquireMachinesLock for "ha-485000-m03"
	I0725 10:49:18.922881    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:49:18.922891    3774 fix.go:54] fixHost starting: m03
	I0725 10:49:18.923315    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:49:18.923351    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:49:18.932987    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51938
	I0725 10:49:18.933376    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:49:18.933781    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:49:18.933802    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:49:18.934032    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:49:18.934154    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:18.934244    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetState
	I0725 10:49:18.934342    3774 main.go:141] libmachine: (ha-485000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:18.934436    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid from json: 3293
	I0725 10:49:18.935384    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid 3293 missing from process table
	I0725 10:49:18.935414    3774 fix.go:112] recreateIfNeeded on ha-485000-m03: state=Stopped err=<nil>
	I0725 10:49:18.935426    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	W0725 10:49:18.935534    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:49:18.973151    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000-m03" ...
	I0725 10:49:19.030981    3774 main.go:141] libmachine: (ha-485000-m03) Calling .Start
	I0725 10:49:19.031240    3774 main.go:141] libmachine: (ha-485000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:19.031347    3774 main.go:141] libmachine: (ha-485000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid
	I0725 10:49:19.033047    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid 3293 missing from process table
	I0725 10:49:19.033059    3774 main.go:141] libmachine: (ha-485000-m03) DBG | pid 3293 is in state "Stopped"
	I0725 10:49:19.033079    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid...
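
Here the driver notices that the pid recorded in hyperkit.pid (3293) is no longer in the process table and removes the stale pid file before restarting the VM. Below is a minimal sketch of that liveness test; the function name, path, and error handling are illustrative, not minikube's actual implementation.

    package main

    import (
        "errors"
        "fmt"
        "os"
        "strconv"
        "strings"
        "syscall"
    )

    // pidAlive reports whether the pid stored in pidFile still refers to a live process.
    func pidAlive(pidFile string) (bool, error) {
        data, err := os.ReadFile(pidFile)
        if err != nil {
            return false, err
        }
        pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
        if err != nil {
            return false, err
        }
        // Signal 0 performs error checking only; ESRCH means the process is gone.
        if err := syscall.Kill(pid, 0); err != nil {
            if errors.Is(err, syscall.ESRCH) {
                return false, nil
            }
            return false, err
        }
        return true, nil
    }

    func main() {
        // Path shortened for illustration.
        alive, err := pidAlive("/Users/jenkins/.minikube/machines/ha-485000-m03/hyperkit.pid")
        fmt.Println(alive, err)
    }
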
	I0725 10:49:19.033332    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Using UUID 8bec60ab-aefc-4069-8cc1-870073932ec4
	I0725 10:49:19.063128    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Generated MAC f2:df:a:a6:c4:51
	I0725 10:49:19.063160    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:49:19.063291    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8bec60ab-aefc-4069-8cc1-870073932ec4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ba960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:49:19.063329    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8bec60ab-aefc-4069-8cc1-870073932ec4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ba960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:49:19.063383    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8bec60ab-aefc-4069-8cc1-870073932ec4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/ha-485000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:49:19.063428    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8bec60ab-aefc-4069-8cc1-870073932ec4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/ha-485000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:49:19.063456    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:49:19.065029    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Pid is 3803
	I0725 10:49:19.065402    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Attempt 0
	I0725 10:49:19.065431    3774 main.go:141] libmachine: (ha-485000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:19.065485    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid from json: 3803
	I0725 10:49:19.067650    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Searching for f2:df:a:a6:c4:51 in /var/db/dhcpd_leases ...
	I0725 10:49:19.067771    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:49:19.067897    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e185}
	I0725 10:49:19.067935    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetConfigRaw
	I0725 10:49:19.067949    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e173}
	I0725 10:49:19.067969    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:49:19.067982    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a3e09c}
	I0725 10:49:19.068002    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Found match: f2:df:a:a6:c4:51
	I0725 10:49:19.068014    3774 main.go:141] libmachine: (ha-485000-m03) DBG | IP: 192.169.0.7
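
The generated MAC is then matched against /var/db/dhcpd_leases to recover the VM's DHCP-assigned address (192.169.0.7). A rough sketch of that lookup is below; the lease-file field names (name=/ip_address=/hw_address=) are assumptions based on the macOS dhcpd lease format, and the parsing is deliberately naive.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // lookupLeaseIP scans an Apple-style dhcpd_leases file for the entry whose
    // hw_address ends with the given MAC and returns the ip_address seen for it.
    func lookupLeaseIP(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
                return ip, nil // assumes ip_address precedes hw_address within each entry
            }
        }
        if err := sc.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        ip, err := lookupLeaseIP("/var/db/dhcpd_leases", "f2:df:a:a6:c4:51")
        fmt.Println(ip, err)
    }
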
	I0725 10:49:19.068702    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:49:19.069020    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:49:19.069625    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:49:19.069642    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:19.069816    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:19.069967    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:19.070130    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:19.070266    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:19.070381    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:19.070553    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:19.070752    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:19.070765    3774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 10:49:19.074618    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:49:19.083650    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:49:19.084521    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:49:19.084551    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:49:19.084569    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:49:19.084582    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:49:19.479143    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:49:19.479156    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:49:19.594100    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:49:19.594116    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:49:19.594124    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:49:19.594130    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:49:19.595151    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:49:19.595161    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:49:25.234237    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:49:25.234303    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:49:25.234325    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:49:25.257972    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:49:54.133594    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:49:54.133609    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetMachineName
	I0725 10:49:54.133749    3774 buildroot.go:166] provisioning hostname "ha-485000-m03"
	I0725 10:49:54.133758    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetMachineName
	I0725 10:49:54.133861    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.133950    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.134036    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.134136    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.134221    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.134384    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.134539    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.134553    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000-m03 && echo "ha-485000-m03" | sudo tee /etc/hostname
	I0725 10:49:54.200648    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000-m03
	
	I0725 10:49:54.200663    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.200790    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.200879    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.200961    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.201044    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.201180    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.201420    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.201433    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:49:54.263039    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
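
Each of these provisioning steps is a single command pushed over SSH to the guest (the ssh_runner / native SSH client calls above). A bare-bones sketch of that pattern with golang.org/x/crypto/ssh follows; the host, key path, and the InsecureIgnoreHostKey callback are simplifications for illustration, not minikube's exact configuration.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path shortened for illustration.
        key, err := os.ReadFile("/Users/jenkins/.minikube/machines/ha-485000-m03/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
        }
        client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        // Same style of one-shot command as the hostname step in the log.
        out, err := sess.CombinedOutput(`sudo hostname ha-485000-m03 && echo "ha-485000-m03" | sudo tee /etc/hostname`)
        fmt.Printf("%s err=%v\n", out, err)
    }
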
	I0725 10:49:54.263056    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:49:54.263070    3774 buildroot.go:174] setting up certificates
	I0725 10:49:54.263076    3774 provision.go:84] configureAuth start
	I0725 10:49:54.263083    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetMachineName
	I0725 10:49:54.263216    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:49:54.263306    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.263391    3774 provision.go:143] copyHostCerts
	I0725 10:49:54.263427    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:49:54.263491    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:49:54.263497    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:49:54.263638    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:49:54.263837    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:49:54.263878    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:49:54.263883    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:49:54.263964    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:49:54.264113    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:49:54.264157    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:49:54.264162    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:49:54.264238    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:49:54.264391    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000-m03 san=[127.0.0.1 192.169.0.7 ha-485000-m03 localhost minikube]
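
provision.go then issues a server certificate for the machine, signed by the minikube CA and carrying the SANs listed above (127.0.0.1, 192.169.0.7, ha-485000-m03, localhost, minikube). A condensed crypto/x509 sketch of that kind of issuance follows; it generates a throwaway CA in-process instead of loading ca.pem/ca-key.pem, an assumption made purely to keep the example self-contained.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (the real flow loads certs/ca.pem and certs/ca-key.pem instead).
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            BasicConstraintsValid: true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
        }

        // Server key and certificate with the SANs used for ha-485000-m03.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-485000-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
            DNSNames:     []string{"ha-485000-m03", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caTmpl, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        // PEM output stands in for writing machines/server.pem.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
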
	I0725 10:49:54.466588    3774 provision.go:177] copyRemoteCerts
	I0725 10:49:54.466634    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:49:54.466649    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.466797    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.466896    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.466976    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.467051    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:54.501149    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:49:54.501228    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:49:54.520581    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:49:54.520648    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 10:49:54.540217    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:49:54.540294    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 10:49:54.559440    3774 provision.go:87] duration metric: took 296.351002ms to configureAuth
	I0725 10:49:54.559454    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:49:54.559628    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:54.559646    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:54.559774    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.559865    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.559954    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.560037    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.560104    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.560211    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.560343    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.560351    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:49:54.614691    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:49:54.614704    3774 buildroot.go:70] root file system type: tmpfs
	I0725 10:49:54.614776    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:49:54.614790    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.614925    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.615019    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.615123    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.615214    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.615348    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.615499    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.615544    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:49:54.680274    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 10:49:54.680293    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.680409    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.680496    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.680575    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.680666    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.680823    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.680965    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.680985    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:49:56.345971    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 10:49:56.345985    3774 machine.go:97] duration metric: took 37.275852067s to provisionDockerMachine
	I0725 10:49:56.345993    3774 start.go:293] postStartSetup for "ha-485000-m03" (driver="hyperkit")
	I0725 10:49:56.346001    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:49:56.346022    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.346236    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:49:56.346252    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.346356    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.346471    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.346553    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.346642    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:56.380977    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:49:56.384488    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:49:56.384501    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:49:56.384614    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:49:56.384806    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:49:56.384814    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:49:56.385027    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:49:56.392745    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:49:56.412613    3774 start.go:296] duration metric: took 66.60995ms for postStartSetup
	I0725 10:49:56.412634    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.412808    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:49:56.412819    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.412903    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.412988    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.413073    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.413150    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:56.446365    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:49:56.446421    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:49:56.498637    3774 fix.go:56] duration metric: took 37.575244811s for fixHost
	I0725 10:49:56.498667    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.498812    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.498919    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.499016    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.499104    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.499238    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:56.499386    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:56.499396    3774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 10:49:56.555439    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929796.646350339
	
	I0725 10:49:56.555451    3774 fix.go:216] guest clock: 1721929796.646350339
	I0725 10:49:56.555456    3774 fix.go:229] Guest: 2024-07-25 10:49:56.646350339 -0700 PDT Remote: 2024-07-25 10:49:56.498656 -0700 PDT m=+90.525587748 (delta=147.694339ms)
	I0725 10:49:56.555467    3774 fix.go:200] guest clock delta is within tolerance: 147.694339ms
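
The clock check above runs date +%s.%N on the guest and compares it with the host clock, accepting the ~148ms delta as within tolerance. A small sketch of that comparison is below; the one-second tolerance is an assumption for illustration, not the value minikube uses.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses the guest's "date +%s.%N" output and returns host-now minus guest-now.
    func guestClockDelta(guestOut string) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return time.Since(guest), nil
    }

    func main() {
        delta, err := guestClockDelta("1721929796.646350339")
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second // illustrative tolerance only
        within := delta > -tolerance && delta < tolerance
        fmt.Printf("delta=%v within tolerance=%v\n", delta, within)
    }
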
	I0725 10:49:56.555470    3774 start.go:83] releasing machines lock for "ha-485000-m03", held for 37.632106624s
	I0725 10:49:56.555487    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.555618    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:49:56.578528    3774 out.go:177] * Found network options:
	I0725 10:49:56.599071    3774 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0725 10:49:56.620070    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:49:56.620097    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:49:56.620115    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.620743    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.620967    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.621083    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:49:56.621119    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	W0725 10:49:56.621187    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:49:56.621219    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:49:56.621317    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 10:49:56.621327    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.621338    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.621512    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.621541    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.621703    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.621726    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.621905    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:56.621918    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.622026    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	W0725 10:49:56.652986    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:49:56.653049    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:49:56.706201    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 10:49:56.706215    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:49:56.706281    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:49:56.721415    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:49:56.730471    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:49:56.739423    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:49:56.739473    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:49:56.748522    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:49:56.757654    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:49:56.766745    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:49:56.775683    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:49:56.785223    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:49:56.794261    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:49:56.803167    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:49:56.812374    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:49:56.820684    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:49:56.828942    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:56.928553    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0725 10:49:56.947129    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:49:56.947197    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:49:56.959097    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:49:56.970776    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:49:56.984568    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:49:56.994746    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:49:57.005192    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:49:57.026508    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:49:57.037792    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:49:57.052754    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:49:57.055697    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:49:57.062771    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:49:57.076283    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:49:57.171356    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:49:57.271094    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:49:57.271118    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 10:49:57.288365    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:57.388283    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:49:59.699253    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.310915582s)
	I0725 10:49:59.699313    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 10:49:59.710426    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:49:59.721257    3774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 10:49:59.814380    3774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 10:49:59.915914    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:00.023205    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 10:50:00.037086    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:50:00.048065    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:00.150126    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 10:50:00.211936    3774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 10:50:00.212049    3774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 10:50:00.216805    3774 start.go:563] Will wait 60s for crictl version
	I0725 10:50:00.216881    3774 ssh_runner.go:195] Run: which crictl
	I0725 10:50:00.220095    3774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 10:50:00.244551    3774 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0725 10:50:00.244627    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:50:00.264059    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:50:00.304819    3774 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0725 10:50:00.346760    3774 out.go:177]   - env NO_PROXY=192.169.0.5
	I0725 10:50:00.368834    3774 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0725 10:50:00.394661    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:50:00.395004    3774 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0725 10:50:00.399400    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:50:00.409863    3774 mustload.go:65] Loading cluster: ha-485000
	I0725 10:50:00.410047    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:00.410280    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:50:00.410302    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:50:00.419259    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51960
	I0725 10:50:00.419623    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:50:00.419945    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:50:00.419955    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:50:00.420150    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:50:00.420256    3774 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:50:00.420338    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:00.420423    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3787
	I0725 10:50:00.421353    3774 host.go:66] Checking if "ha-485000" exists ...
	I0725 10:50:00.421593    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:50:00.421616    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:50:00.430525    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51962
	I0725 10:50:00.430861    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:50:00.431198    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:50:00.431211    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:50:00.431432    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:50:00.431553    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:50:00.431647    3774 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000 for IP: 192.169.0.7
	I0725 10:50:00.431652    3774 certs.go:194] generating shared ca certs ...
	I0725 10:50:00.431664    3774 certs.go:226] acquiring lock for ca certs: {Name:mk9c2ae9585d6ecb7080ac84ff2a6c71f713722e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:50:00.431829    3774 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key
	I0725 10:50:00.431909    3774 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key
	I0725 10:50:00.431917    3774 certs.go:256] generating profile certs ...
	I0725 10:50:00.432022    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key
	I0725 10:50:00.432138    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.cfc1c64d
	I0725 10:50:00.432211    3774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key
	I0725 10:50:00.432218    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 10:50:00.432239    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 10:50:00.432260    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 10:50:00.432278    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 10:50:00.432295    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 10:50:00.432331    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 10:50:00.432368    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 10:50:00.432392    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 10:50:00.432488    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem (1338 bytes)
	W0725 10:50:00.432538    3774 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732_empty.pem, impossibly tiny 0 bytes
	I0725 10:50:00.432546    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 10:50:00.432580    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem (1078 bytes)
	I0725 10:50:00.432612    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem (1123 bytes)
	I0725 10:50:00.432641    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem (1679 bytes)
	I0725 10:50:00.432707    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:50:00.432744    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.432766    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem -> /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.432786    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.432812    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:50:00.432904    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:50:00.432984    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:50:00.433067    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:50:00.433144    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:50:00.457759    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0725 10:50:00.461662    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0725 10:50:00.470495    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0725 10:50:00.474031    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0725 10:50:00.483513    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0725 10:50:00.486473    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0725 10:50:00.495156    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0725 10:50:00.498306    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0725 10:50:00.507713    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0725 10:50:00.510936    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0725 10:50:00.519399    3774 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0725 10:50:00.523489    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0725 10:50:00.532193    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 10:50:00.552954    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 10:50:00.573061    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 10:50:00.593555    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 10:50:00.613482    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 10:50:00.633390    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 10:50:00.653522    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 10:50:00.673721    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 10:50:00.693637    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 10:50:00.713744    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem --> /usr/share/ca-certificates/1732.pem (1338 bytes)
	I0725 10:50:00.733957    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /usr/share/ca-certificates/17322.pem (1708 bytes)
	I0725 10:50:00.753667    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0725 10:50:00.767462    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0725 10:50:00.781289    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0725 10:50:00.795165    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0725 10:50:00.808987    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0725 10:50:00.823098    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0725 10:50:00.836829    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0725 10:50:00.850678    3774 ssh_runner.go:195] Run: openssl version
	I0725 10:50:00.854970    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17322.pem && ln -fs /usr/share/ca-certificates/17322.pem /etc/ssl/certs/17322.pem"
	I0725 10:50:00.863536    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.867064    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:38 /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.867101    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.871309    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17322.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 10:50:00.879945    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 10:50:00.888535    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.892087    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.892134    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.896459    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 10:50:00.905152    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1732.pem && ln -fs /usr/share/ca-certificates/1732.pem /etc/ssl/certs/1732.pem"
	I0725 10:50:00.913558    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.917014    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:38 /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.917052    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.921381    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1732.pem /etc/ssl/certs/51391683.0"
	I0725 10:50:00.929938    3774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 10:50:00.933439    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 10:50:00.937823    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 10:50:00.942083    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 10:50:00.946333    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 10:50:00.950751    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 10:50:00.954972    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 10:50:00.959334    3774 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.30.3 docker true true} ...
	I0725 10:50:00.959393    3774 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-485000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 10:50:00.959413    3774 kube-vip.go:115] generating kube-vip config ...
	I0725 10:50:00.959451    3774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 10:50:00.974500    3774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 10:50:00.974542    3774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0725 10:50:00.974599    3774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 10:50:00.983391    3774 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 10:50:00.983448    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0725 10:50:00.992451    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0725 10:50:01.006435    3774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 10:50:01.020173    3774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0725 10:50:01.034556    3774 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0725 10:50:01.037605    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:50:01.047456    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:01.141578    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:50:01.156504    3774 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 10:50:01.156711    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:01.178105    3774 out.go:177] * Verifying Kubernetes components...
	I0725 10:50:01.219418    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:01.336829    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:50:01.353501    3774 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:50:01.353708    3774 kapi.go:59] client config for ha-485000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe203400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0725 10:50:01.353744    3774 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0725 10:50:01.353906    3774 node_ready.go:35] waiting up to 6m0s for node "ha-485000-m03" to be "Ready" ...
	I0725 10:50:01.353944    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:01.353949    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.353955    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.353958    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.356860    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:01.357167    3774 node_ready.go:49] node "ha-485000-m03" has status "Ready":"True"
	I0725 10:50:01.357178    3774 node_ready.go:38] duration metric: took 3.262682ms for node "ha-485000-m03" to be "Ready" ...
	I0725 10:50:01.357193    3774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 10:50:01.357239    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:01.357246    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.357252    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.357257    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.362406    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:50:01.367672    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:01.367737    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:01.367747    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.367754    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.367758    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.370874    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:01.372501    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:01.372645    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.372667    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.372873    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.375539    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:01.868232    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:01.868248    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.868255    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.868258    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.870654    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:01.871201    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:01.871209    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.871215    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.871218    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.873125    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:02.368740    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:02.368762    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.368777    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.368783    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.377869    3774 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0725 10:50:02.379271    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:02.379283    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.379290    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.379293    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.385746    3774 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0725 10:50:02.869348    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:02.869364    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.869371    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.869375    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.871499    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:02.872136    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:02.872144    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.872150    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.872155    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.874538    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:03.367877    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:03.367889    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.367896    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.367899    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.371090    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:03.371735    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:03.371743    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.371750    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.371753    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.374291    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:03.374875    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:03.869639    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:03.869654    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.869661    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.869665    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.871755    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:03.872260    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:03.872269    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.872275    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.872280    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.874379    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:04.369638    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:04.369657    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.369703    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.369708    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.372772    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:04.373269    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:04.373277    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.373282    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.373299    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.375712    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:04.869487    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:04.869502    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.869508    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.869512    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.871992    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:04.872445    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:04.872453    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.872459    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.872463    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.874498    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:05.368695    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:05.368711    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.368717    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.368721    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.370818    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:05.371324    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:05.371336    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.371342    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.371346    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.373135    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:05.869401    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:05.869464    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.869473    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.869478    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.871690    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:05.872113    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:05.872121    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.872153    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.872158    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.874010    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:05.874326    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:06.369093    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:06.369156    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.369165    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.369170    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.372202    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:06.372616    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:06.372624    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.372630    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.372639    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.374312    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:06.869508    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:06.869524    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.869531    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.869536    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.872066    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:06.872572    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:06.872580    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.872586    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.872589    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.874648    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:07.368133    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:07.368154    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.368178    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.368182    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.370664    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:07.371146    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:07.371153    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.371158    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.371165    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.372921    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:07.868556    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:07.868570    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.868576    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.868580    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.870520    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:07.871027    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:07.871035    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.871041    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.871044    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.872683    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:08.368226    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:08.368242    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.368249    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.368253    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.370410    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:08.370844    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:08.370852    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.370858    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.370862    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.372615    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:08.373006    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:08.868118    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:08.868176    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.868187    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.868194    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.870430    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:08.870901    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:08.870909    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.870915    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.870919    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.872656    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:09.368941    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:09.368959    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.368966    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.368970    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.371000    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:09.371430    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:09.371438    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.371444    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.371447    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.372991    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:09.868002    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:09.868047    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.868058    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.868062    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.870388    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:09.870824    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:09.870832    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.870838    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.870842    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.872677    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:10.369049    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:10.369064    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.369071    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.369074    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.371043    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:10.371476    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:10.371483    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.371489    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.371492    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.372986    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:10.373378    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:10.869188    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:10.869207    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.869244    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.869251    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.871587    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:10.872055    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:10.872063    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.872068    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.872071    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.873650    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.368644    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:11.368660    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.368670    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.368674    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.370971    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.371396    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:11.371404    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.371410    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.371414    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.373238    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.373609    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.373619    3774 pod_ready.go:81] duration metric: took 10.005797843s for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.373657    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.373710    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pnmm6
	I0725 10:50:11.373716    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.373722    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.373728    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.375656    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.376026    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:11.376035    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.376041    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.376044    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.378143    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.378749    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.378758    3774 pod_ready.go:81] duration metric: took 5.088497ms for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.378765    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.378806    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000
	I0725 10:50:11.378810    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.378816    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.378820    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.380851    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.381363    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:11.381371    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.381377    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.381381    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.383335    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.383692    3774 pod_ready.go:92] pod "etcd-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.383702    3774 pod_ready.go:81] duration metric: took 4.931732ms for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.383708    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.383749    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m02
	I0725 10:50:11.383754    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.383760    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.383764    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.385637    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.386081    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:11.386088    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.386094    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.386097    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.388000    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.388355    3774 pod_ready.go:92] pod "etcd-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.388366    3774 pod_ready.go:81] duration metric: took 4.652083ms for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.388374    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.388416    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:11.388421    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.388427    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.388431    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.390189    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.390609    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:11.390617    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.390622    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.390633    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.392651    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.889083    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:11.889128    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.889138    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.889143    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.891738    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.892209    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:11.892216    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.892221    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.892223    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.893988    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:12.390496    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:12.390512    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.390518    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.390521    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.392470    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:12.392850    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:12.392858    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.392864    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.392869    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.394421    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:12.889565    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:12.889659    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.889673    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.889678    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.892819    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:12.893389    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:12.893400    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.893409    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.893414    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.895094    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:13.389794    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:13.389809    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.389816    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.389820    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.391840    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:13.392224    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:13.392231    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.392237    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.392241    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.393832    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:13.394204    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:13.889796    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:13.889812    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.889818    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.889823    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.892206    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:13.892725    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:13.892732    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.892737    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.892747    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.894556    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:14.390135    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:14.390150    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.390156    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.390160    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.392174    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:14.392583    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:14.392590    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.392596    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.392612    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.394268    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:14.889560    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:14.889573    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.889579    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.889582    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.891400    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:14.891841    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:14.891848    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.891854    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.891860    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.893620    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:15.388714    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:15.388793    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.388820    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.388827    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.391756    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:15.392117    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:15.392125    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.392134    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.392139    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.393815    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:15.890117    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:15.890137    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.890149    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.890157    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.893399    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:15.894228    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:15.894235    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.894241    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.894245    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.895912    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:15.896234    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:16.389894    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:16.389912    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.389920    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.389924    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.391955    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:16.392480    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:16.392488    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.392493    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.392505    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.394310    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:16.889781    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:16.889806    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.889818    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.889826    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.893556    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:16.894315    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:16.894326    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.894333    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.894337    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.896126    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:17.388575    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:17.388587    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.388602    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.388605    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.390595    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:17.391157    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:17.391165    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.391170    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.391173    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.392829    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:17.889351    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:17.889367    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.889373    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.889376    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.891538    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:17.891973    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:17.891981    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.891987    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.891990    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.893775    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:18.389267    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:18.389290    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.389304    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.389312    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.392450    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:18.392859    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:18.392867    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.392872    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.392875    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.394522    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:18.394948    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:18.890099    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:18.890111    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.890118    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.890121    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.892565    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:18.892967    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:18.892975    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.892981    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.892985    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.894937    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:19.389827    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:19.389924    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.389936    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.389942    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.392795    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:19.393525    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:19.393536    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.393544    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.393553    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.395412    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:19.889931    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:19.889949    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.889958    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.889962    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.892495    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:19.893008    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:19.893015    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.893021    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.893024    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.894590    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.390037    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:20.390057    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.390068    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.390074    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.393277    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:20.393997    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:20.394008    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.394016    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.394021    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.395790    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.396084    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:20.889112    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:20.889127    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.889135    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.889139    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.891727    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:20.892142    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:20.892149    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.892155    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.892158    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.893897    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.894418    3774 pod_ready.go:92] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.894427    3774 pod_ready.go:81] duration metric: took 9.505922344s for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
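The block above is minikube's pod_ready wait loop: for each control-plane pod it alternates a GET on the pod and a GET on the pod's hosting node roughly every 500 ms until the pod's Ready condition turns True (etcd-ha-485000-m03 took ~9.5 s here). Below is a minimal client-go sketch of the same idea; it is not minikube's code, and the helper name waitPodReady is made up for illustration.

```go
// Minimal sketch (not minikube's code): poll a pod until its Ready
// condition is True, mirroring the ~500 ms GET cadence seen in the log.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-ha-485000-m03", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready")
}

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}
```

Minikube's real loop additionally skips pods whose node is not Ready, which is what the pod_ready:97 / pod_ready:66 lines further below report for ha-485000-m04.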
	I0725 10:50:20.894440    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.894470    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000
	I0725 10:50:20.894475    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.894481    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.894485    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.896512    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:20.897090    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:20.897097    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.897103    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.897106    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.898896    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.899262    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.899272    3774 pod_ready.go:81] duration metric: took 4.826573ms for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.899278    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.899309    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m02
	I0725 10:50:20.899314    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.899319    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.899324    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.901127    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.901676    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:20.901683    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.901689    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.901692    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.903167    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.903492    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.903501    3774 pod_ready.go:81] duration metric: took 4.217035ms for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.903507    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.903535    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m03
	I0725 10:50:20.903539    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.903548    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.903554    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.905060    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.905423    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:20.905430    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.905435    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.905438    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.906893    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.907231    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.907242    3774 pod_ready.go:81] duration metric: took 3.730011ms for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.907249    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.907283    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000
	I0725 10:50:20.907288    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.907293    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.907298    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.908832    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.909193    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:20.909200    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.909206    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.909211    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.910822    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.911148    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.911157    3774 pod_ready.go:81] duration metric: took 3.903336ms for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.911164    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.090249    3774 request.go:629] Waited for 179.043752ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:50:21.090326    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:50:21.090337    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.090348    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.090357    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.093923    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:21.289919    3774 request.go:629] Waited for 195.572332ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:21.289952    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:21.289956    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.289963    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.289968    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.292331    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:21.292914    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:21.292923    3774 pod_ready.go:81] duration metric: took 381.74891ms for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.292930    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.490166    3774 request.go:629] Waited for 197.198888ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:50:21.490242    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:50:21.490248    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.490254    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.490258    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.492317    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:21.690619    3774 request.go:629] Waited for 197.80318ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:21.690692    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:21.690700    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.690707    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.690712    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.693156    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:21.693612    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:21.693621    3774 pod_ready.go:81] duration metric: took 400.680496ms for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.693628    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.889496    3774 request.go:629] Waited for 195.831751ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:50:21.889578    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:50:21.889584    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.889590    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.889594    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.891941    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:22.089475    3774 request.go:629] Waited for 196.98259ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:22.089532    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:22.089542    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.089554    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.089562    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.092579    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:22.093142    3774 pod_ready.go:92] pod "kube-proxy-65n48" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:22.093152    3774 pod_ready.go:81] duration metric: took 399.51252ms for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.093159    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.289233    3774 request.go:629] Waited for 195.965994ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:50:22.289304    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:50:22.289317    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.289329    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.289336    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.292489    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:22.491051    3774 request.go:629] Waited for 197.933507ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:22.491135    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:22.491147    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.491172    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.491181    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.494764    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:22.495189    3774 pod_ready.go:92] pod "kube-proxy-9w8bj" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:22.495199    3774 pod_ready.go:81] duration metric: took 402.028626ms for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.495206    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.690648    3774 request.go:629] Waited for 195.401863ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:50:22.690726    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:50:22.690734    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.690741    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.690747    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.693258    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:22.889677    3774 request.go:629] Waited for 195.917412ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:22.889724    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:22.889761    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.889767    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.889773    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.892446    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:22.892903    3774 pod_ready.go:92] pod "kube-proxy-dc5jq" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:22.892913    3774 pod_ready.go:81] duration metric: took 397.69615ms for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.892920    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.090761    3774 request.go:629] Waited for 197.803166ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:50:23.090814    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:50:23.090819    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.090826    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.090831    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.093064    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:23.290673    3774 request.go:629] Waited for 196.835784ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:50:23.290721    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:50:23.290730    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.290752    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.290763    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.293787    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:23.294164    3774 pod_ready.go:97] node "ha-485000-m04" hosting pod "kube-proxy-mvbkh" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-485000-m04" has status "Ready":"Unknown"
	I0725 10:50:23.294178    3774 pod_ready.go:81] duration metric: took 401.248534ms for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	E0725 10:50:23.294187    3774 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-485000-m04" hosting pod "kube-proxy-mvbkh" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-485000-m04" has status "Ready":"Unknown"
	I0725 10:50:23.294199    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.490423    3774 request.go:629] Waited for 196.114187ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:50:23.490462    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:50:23.490468    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.490475    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.490481    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.493175    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:23.689290    3774 request.go:629] Waited for 195.78427ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:23.689415    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:23.689424    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.689435    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.689442    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.692307    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:23.692985    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:23.692998    3774 pod_ready.go:81] duration metric: took 398.785835ms for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.693009    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.890957    3774 request.go:629] Waited for 197.902466ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:50:23.891038    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:50:23.891044    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.891050    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.891055    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.893361    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:24.090079    3774 request.go:629] Waited for 196.359986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:24.090164    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:24.090175    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.090187    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.090195    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.094081    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:24.094638    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:24.094651    3774 pod_ready.go:81] duration metric: took 401.630539ms for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:24.094660    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:24.290893    3774 request.go:629] Waited for 196.185136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:50:24.291015    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:50:24.291026    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.291038    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.291045    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.294065    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:24.489582    3774 request.go:629] Waited for 194.956133ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:24.489621    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:24.489639    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.489681    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.489688    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.492299    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:24.492641    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:24.492651    3774 pod_ready.go:81] duration metric: took 397.980834ms for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:24.492659    3774 pod_ready.go:38] duration metric: took 23.135149255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
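The recurring "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's client-side token-bucket rate limiter, which defaults to a low QPS/Burst. The sketch below (not minikube's code; the values 50/100 are purely illustrative) shows where those knobs live on a rest.Config.

```go
// Sketch only: the "client-side throttling" waits come from client-go's
// token-bucket rate limiter. Raising QPS/Burst on the rest.Config (or
// supplying a custom RateLimiter) shortens those pauses.
package main

import (
	"log"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	// The defaults are low; the numbers here are illustrative, not tuned.
	cfg.QPS = 50
	cfg.Burst = 100

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
	log.Println("clientset created with a larger client-side rate budget")
}
```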
	I0725 10:50:24.492671    3774 api_server.go:52] waiting for apiserver process to appear ...
	I0725 10:50:24.492734    3774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 10:50:24.505745    3774 api_server.go:72] duration metric: took 23.348906591s to wait for apiserver process to appear ...
	I0725 10:50:24.505757    3774 api_server.go:88] waiting for apiserver healthz status ...
	I0725 10:50:24.505768    3774 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0725 10:50:24.508893    3774 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0725 10:50:24.508926    3774 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0725 10:50:24.508931    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.508937    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.508942    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.509529    3774 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0725 10:50:24.509577    3774 api_server.go:141] control plane version: v1.30.3
	I0725 10:50:24.509588    3774 api_server.go:131] duration metric: took 3.825788ms to wait for apiserver health ...
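After the readiness loop the test probes the API server directly at /healthz and /version. A rough stand-alone equivalent of the /healthz probe follows; skipping TLS verification is an assumption made to keep the sketch short (a real client should trust the cluster CA), and anonymous access to /healthz may be rejected on clusters with stricter RBAC.

```go
// Rough equivalent of the /healthz probe in the log; expect "200 ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for brevity: do not verify the apiserver certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
```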
	I0725 10:50:24.509593    3774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 10:50:24.689653    3774 request.go:629] Waited for 180.026101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:24.689691    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:24.689696    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.689703    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.689707    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.694162    3774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 10:50:24.699605    3774 system_pods.go:59] 26 kube-system pods found
	I0725 10:50:24.699621    3774 system_pods.go:61] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:50:24.699626    3774 system_pods.go:61] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:50:24.699629    3774 system_pods.go:61] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:50:24.699632    3774 system_pods.go:61] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running
	I0725 10:50:24.699635    3774 system_pods.go:61] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:50:24.699638    3774 system_pods.go:61] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:50:24.699641    3774 system_pods.go:61] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:50:24.699644    3774 system_pods.go:61] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:50:24.699647    3774 system_pods.go:61] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running
	I0725 10:50:24.699651    3774 system_pods.go:61] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:50:24.699654    3774 system_pods.go:61] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running
	I0725 10:50:24.699657    3774 system_pods.go:61] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:50:24.699660    3774 system_pods.go:61] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:50:24.699662    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running
	I0725 10:50:24.699665    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:50:24.699668    3774 system_pods.go:61] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:50:24.699670    3774 system_pods.go:61] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:50:24.699672    3774 system_pods.go:61] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running
	I0725 10:50:24.699676    3774 system_pods.go:61] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:50:24.699680    3774 system_pods.go:61] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:50:24.699682    3774 system_pods.go:61] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running
	I0725 10:50:24.699685    3774 system_pods.go:61] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:50:24.699687    3774 system_pods.go:61] "kube-vip-ha-485000" [d13235f9-c36c-4988-8638-4964a718c070] Running
	I0725 10:50:24.699690    3774 system_pods.go:61] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:50:24.699692    3774 system_pods.go:61] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:50:24.699697    3774 system_pods.go:61] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 10:50:24.699701    3774 system_pods.go:74] duration metric: took 190.102528ms to wait for pod list to return data ...
	I0725 10:50:24.699707    3774 default_sa.go:34] waiting for default service account to be created ...
	I0725 10:50:24.890642    3774 request.go:629] Waited for 190.892646ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:50:24.890727    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:50:24.890735    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.890743    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.890750    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.893133    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:24.893199    3774 default_sa.go:45] found service account: "default"
	I0725 10:50:24.893209    3774 default_sa.go:55] duration metric: took 193.493853ms for default service account to be created ...
	I0725 10:50:24.893214    3774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 10:50:25.089856    3774 request.go:629] Waited for 196.580095ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:25.089959    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:25.089969    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:25.089980    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:25.089989    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:25.095665    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:50:25.100358    3774 system_pods.go:86] 26 kube-system pods found
	I0725 10:50:25.100371    3774 system_pods.go:89] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:50:25.100375    3774 system_pods.go:89] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:50:25.100378    3774 system_pods.go:89] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:50:25.100382    3774 system_pods.go:89] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running
	I0725 10:50:25.100386    3774 system_pods.go:89] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:50:25.100389    3774 system_pods.go:89] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:50:25.100393    3774 system_pods.go:89] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:50:25.100396    3774 system_pods.go:89] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:50:25.100400    3774 system_pods.go:89] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running
	I0725 10:50:25.100415    3774 system_pods.go:89] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:50:25.100424    3774 system_pods.go:89] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running
	I0725 10:50:25.100429    3774 system_pods.go:89] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:50:25.100435    3774 system_pods.go:89] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:50:25.100453    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running
	I0725 10:50:25.100457    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:50:25.100460    3774 system_pods.go:89] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:50:25.100465    3774 system_pods.go:89] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:50:25.100469    3774 system_pods.go:89] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running
	I0725 10:50:25.100472    3774 system_pods.go:89] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:50:25.100476    3774 system_pods.go:89] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:50:25.100481    3774 system_pods.go:89] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running
	I0725 10:50:25.100485    3774 system_pods.go:89] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:50:25.100489    3774 system_pods.go:89] "kube-vip-ha-485000" [d13235f9-c36c-4988-8638-4964a718c070] Running
	I0725 10:50:25.100492    3774 system_pods.go:89] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:50:25.100495    3774 system_pods.go:89] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:50:25.100501    3774 system_pods.go:89] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 10:50:25.100509    3774 system_pods.go:126] duration metric: took 207.289229ms to wait for k8s-apps to be running ...
	I0725 10:50:25.100516    3774 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 10:50:25.100565    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:50:25.111946    3774 system_svc.go:56] duration metric: took 11.425609ms WaitForService to wait for kubelet
	I0725 10:50:25.111960    3774 kubeadm.go:582] duration metric: took 23.955115877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
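The apiserver-process and kubelet-service checks above are plain shell commands run on the VM over SSH ("sudo pgrep -xnf kube-apiserver.*minikube.*" and "sudo systemctl is-active --quiet service kubelet"). The sketch below performs the same exit-code checks locally with os/exec; the SSH transport used by the test is omitted.

```go
// Local sketch of the two liveness checks in the log; minikube runs the
// same commands on the guest VM over SSH, which is not shown here.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means at least one process matched the pattern.
	apiserverUp := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil

	// systemctl is-active --quiet exits 0 only when the unit is active.
	// The extra "service" token mirrors the command line in the log verbatim.
	kubeletActive := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil

	fmt.Printf("apiserver process: %v, kubelet service: %v\n", apiserverUp, kubeletActive)
}
```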
	I0725 10:50:25.111982    3774 node_conditions.go:102] verifying NodePressure condition ...
	I0725 10:50:25.290128    3774 request.go:629] Waited for 178.101329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0725 10:50:25.290194    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0725 10:50:25.290200    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:25.290206    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:25.290210    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:25.292310    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:25.293114    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293124    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293137    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293141    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293144    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293147    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293150    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293153    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293156    3774 node_conditions.go:105] duration metric: took 181.167189ms to run NodePressure ...
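The NodePressure step only reads each node's reported capacity (the four ephemeral-storage/cpu pairs printed above, one per node). A short client-go sketch that prints the same two figures, assuming a reachable kubeconfig; it is illustrative, not the test's own code.

```go
// Sketch: list nodes and print the two capacity figures the log reports.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```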
	I0725 10:50:25.293164    3774 start.go:241] waiting for startup goroutines ...
	I0725 10:50:25.293180    3774 start.go:255] writing updated cluster config ...
	I0725 10:50:25.316414    3774 out.go:177] 
	I0725 10:50:25.353963    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:25.354081    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:50:25.376345    3774 out.go:177] * Starting "ha-485000-m04" worker node in "ha-485000" cluster
	I0725 10:50:25.418264    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:50:25.418295    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:50:25.418479    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:50:25.418492    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:50:25.418579    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:50:25.419248    3774 start.go:360] acquireMachinesLock for ha-485000-m04: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:50:25.419314    3774 start.go:364] duration metric: took 49.541µs to acquireMachinesLock for "ha-485000-m04"
	I0725 10:50:25.419336    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:50:25.419342    3774 fix.go:54] fixHost starting: m04
	I0725 10:50:25.419646    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:50:25.419671    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:50:25.428557    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51966
	I0725 10:50:25.428876    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:50:25.429185    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:50:25.429196    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:50:25.429408    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:50:25.429520    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:25.429598    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetState
	I0725 10:50:25.429683    3774 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:25.429771    3774 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid from json: 3386
	I0725 10:50:25.430679    3774 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid 3386 missing from process table
	I0725 10:50:25.430699    3774 fix.go:112] recreateIfNeeded on ha-485000-m04: state=Stopped err=<nil>
	I0725 10:50:25.430707    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	W0725 10:50:25.430787    3774 fix.go:138] unexpected machine state, will restart: <nil>
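The driver concludes the machine is Stopped because the pid it recorded for the VM (3386) is no longer in the process table. On Unix that check amounts to a signal-0 probe, sketched below; this is an illustration, not the hyperkit driver's actual code.

```go
// Sketch of the "pid missing from process table" check: on Unix, sending
// signal 0 to a pid reports whether the process exists without affecting it.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func pidAlive(pid int) bool {
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println("hyperkit pid 3386 alive:", pidAlive(3386))
}
```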
	I0725 10:50:25.451265    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000-m04" ...
	I0725 10:50:25.492428    3774 main.go:141] libmachine: (ha-485000-m04) Calling .Start
	I0725 10:50:25.492574    3774 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:25.492592    3774 main.go:141] libmachine: (ha-485000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/hyperkit.pid
	I0725 10:50:25.492648    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Using UUID c1175b3a-154e-40e8-a691-44a5e1615e54
	I0725 10:50:25.517781    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Generated MAC ba:e9:ef:e5:fe:75
	I0725 10:50:25.517800    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:50:25.517982    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c1175b3a-154e-40e8-a691-44a5e1615e54", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:50:25.518030    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c1175b3a-154e-40e8-a691-44a5e1615e54", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:50:25.518090    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "c1175b3a-154e-40e8-a691-44a5e1615e54", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/ha-485000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:50:25.518138    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U c1175b3a-154e-40e8-a691-44a5e1615e54 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/ha-485000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:50:25.518152    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:50:25.519588    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Pid is 3811
	I0725 10:50:25.520056    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Attempt 0
	I0725 10:50:25.520068    3774 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:25.520140    3774 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid from json: 3811
	I0725 10:50:25.521299    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Searching for ba:e9:ef:e5:fe:75 in /var/db/dhcpd_leases ...
	I0725 10:50:25.521358    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:50:25.521373    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a3e1a8}
	I0725 10:50:25.521401    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e185}
	I0725 10:50:25.521437    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e173}
	I0725 10:50:25.521449    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:50:25.521464    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Found match: ba:e9:ef:e5:fe:75
	I0725 10:50:25.521477    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetConfigRaw
	I0725 10:50:25.521524    3774 main.go:141] libmachine: (ha-485000-m04) DBG | IP: 192.169.0.8
	I0725 10:50:25.522369    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetIP
	I0725 10:50:25.522631    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:50:25.523230    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:50:25.523241    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:25.523348    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:25.523441    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:25.523528    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:25.523612    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:25.523711    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:25.523847    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:25.524050    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:25.524062    3774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 10:50:25.527797    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:50:25.536120    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:50:25.537142    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:50:25.537161    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:50:25.537173    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:50:25.537185    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:50:25.927659    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:50:25.927675    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:50:26.042400    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:50:26.042420    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:50:26.042429    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:50:26.042435    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:50:26.043251    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:50:26.043265    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:50:31.642036    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:50:31.642050    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:50:31.642059    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:50:31.665420    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:50:36.584591    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:50:36.584607    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetMachineName
	I0725 10:50:36.584735    3774 buildroot.go:166] provisioning hostname "ha-485000-m04"
	I0725 10:50:36.584758    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetMachineName
	I0725 10:50:36.584843    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.584928    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:36.585028    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.585116    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.585203    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:36.585343    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:36.585486    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:36.585494    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000-m04 && echo "ha-485000-m04" | sudo tee /etc/hostname
	I0725 10:50:36.650595    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000-m04
	
	I0725 10:50:36.650612    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.650747    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:36.650856    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.650943    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.651028    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:36.651142    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:36.651292    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:36.651304    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:50:36.714348    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
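
The hostname provisioning above is two idempotent SSH commands: set the kernel hostname and /etc/hostname, then make sure /etc/hosts has a 127.0.1.1 entry for the new name, replacing an existing 127.0.1.1 line rather than appending a duplicate. A minimal stand-alone sketch of that round trip with golang.org/x/crypto/ssh, using the address, user and key path that appear in this log; the real ssh_runner adds retries and output plumbing omitted here.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Address, user and key path are the ones shown in the log above.
	keyPEM, err := os.ReadFile("/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.169.0.8:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// The same idempotent hostname update the provisioner runs above.
	out, err := session.CombinedOutput(`sudo hostname ha-485000-m04 && echo "ha-485000-m04" | sudo tee /etc/hostname`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
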
	I0725 10:50:36.714365    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:50:36.714375    3774 buildroot.go:174] setting up certificates
	I0725 10:50:36.714381    3774 provision.go:84] configureAuth start
	I0725 10:50:36.714387    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetMachineName
	I0725 10:50:36.714525    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetIP
	I0725 10:50:36.714638    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.714724    3774 provision.go:143] copyHostCerts
	I0725 10:50:36.714755    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:50:36.714823    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:50:36.714829    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:50:36.714964    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:50:36.715183    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:50:36.715234    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:50:36.715240    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:50:36.715334    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:50:36.715487    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:50:36.715532    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:50:36.715542    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:50:36.715621    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:50:36.715776    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000-m04 san=[127.0.0.1 192.169.0.8 ha-485000-m04 localhost minikube]
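
The server certificate is issued from the CA in .minikube/certs with exactly the SAN set listed above (127.0.0.1, 192.169.0.8, ha-485000-m04, localhost, minikube). A rough crypto/x509 sketch of such a step, assuming an RSA CA key in PKCS#1 PEM; minikube's actual helper differs in details such as serial and validity handling, and error handling is trimmed for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// must keeps the sketch short; real code would propagate errors.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// CA material corresponding to .minikube/certs/ca.pem and ca-key.pem.
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
	caCert := must(x509.ParseCertificate(caBlock.Bytes))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes)) // assumes an RSA, PKCS#1 key
	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-485000-m04"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN set from the log line above.
		DNSNames:    []string{"ha-485000-m04", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
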
	I0725 10:50:36.928446    3774 provision.go:177] copyRemoteCerts
	I0725 10:50:36.928501    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:50:36.928518    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.928667    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:36.928766    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.928853    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:36.928937    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:36.963525    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:50:36.963602    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 10:50:36.983132    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:50:36.983215    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 10:50:37.002467    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:50:37.002541    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:50:37.021539    3774 provision.go:87] duration metric: took 307.147237ms to configureAuth
	I0725 10:50:37.021553    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:50:37.021739    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:37.021752    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:37.021873    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:37.021953    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:37.022028    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.022114    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.022191    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:37.022294    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:37.022425    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:37.022432    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:50:37.077868    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:50:37.077881    3774 buildroot.go:70] root file system type: tmpfs
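
A tmpfs root, as on this Buildroot guest, means nothing written under /lib/systemd survives a reboot, which is why the docker unit is regenerated on every provision. The detection is just the one-liner above; a local Go stand-in (GNU coreutils df, so this is meant for the Linux guest, not the macOS host):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe as in the log: df --output=fstype / | tail -n 1
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		panic(err)
	}
	fstype := strings.TrimSpace(string(out))
	fmt.Println("root filesystem type:", fstype)
	if fstype == "tmpfs" {
		fmt.Println("tmpfs root: the docker unit has to be rewritten on every provision")
	}
}
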
	I0725 10:50:37.077955    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:50:37.077968    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:37.078088    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:37.078184    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.078265    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.078353    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:37.078490    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:37.078627    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:37.078675    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:50:37.144185    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 10:50:37.144203    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:37.144342    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:37.144434    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.144523    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.144614    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:37.144742    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:37.144915    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:37.144928    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:50:38.716011    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 10:50:38.716027    3774 machine.go:97] duration metric: took 13.192614029s to provisionDockerMachine
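
The `diff -u ... || { mv ...; systemctl ...; }` step only swaps the new unit in and restarts docker when docker.service.new differs from what is installed; a missing target, as in the output above, also counts as a change. A hypothetical Go rendering of that replace-on-change logic, standing in for the shell minikube sends over SSH:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const unit = "/lib/systemd/system/docker.service"
	cur, _ := os.ReadFile(unit) // read error (e.g. file missing) leaves cur nil, which counts as "changed"
	next, err := os.ReadFile(unit + ".new")
	if err != nil {
		panic(err) // the previous step should have written the rendered unit here
	}
	if bytes.Equal(cur, next) {
		return // unchanged: keep docker running as-is
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := run("sudo", args...); err != nil {
			panic(err)
		}
	}
}
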
	I0725 10:50:38.716035    3774 start.go:293] postStartSetup for "ha-485000-m04" (driver="hyperkit")
	I0725 10:50:38.716042    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:50:38.716057    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:38.716243    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:50:38.716257    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:38.716357    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:38.716441    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.716528    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:38.716625    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:38.757562    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:50:38.761046    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:50:38.761061    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:50:38.761168    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:50:38.761354    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:50:38.761360    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:50:38.761571    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:50:38.769992    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:50:38.800110    3774 start.go:296] duration metric: took 84.064957ms for postStartSetup
	I0725 10:50:38.800132    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:38.800311    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:50:38.800325    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:38.800410    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:38.800488    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.800581    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:38.800667    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:38.835801    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:50:38.835861    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:50:38.889497    3774 fix.go:56] duration metric: took 13.469972078s for fixHost
	I0725 10:50:38.889527    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:38.889666    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:38.889774    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.889857    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.889955    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:38.890084    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:38.890230    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:38.890238    3774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 10:50:38.946246    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929837.991587228
	
	I0725 10:50:38.946261    3774 fix.go:216] guest clock: 1721929837.991587228
	I0725 10:50:38.946267    3774 fix.go:229] Guest: 2024-07-25 10:50:37.991587228 -0700 PDT Remote: 2024-07-25 10:50:38.889513 -0700 PDT m=+132.915879826 (delta=-897.925772ms)
	I0725 10:50:38.946278    3774 fix.go:200] guest clock delta is within tolerance: -897.925772ms
	I0725 10:50:38.946282    3774 start.go:83] releasing machines lock for "ha-485000-m04", held for 13.526780386s
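
The guest clock check parses the node's `date +%s.%N` output and compares it with the host clock; here the roughly 0.9 s skew is within tolerance, so the clock is left untouched. A small sketch of that comparison (the tolerance constant is illustrative, not minikube's exact value, and the literal timestamp is the one from this log, so it will naturally read as far out of tolerance if run later):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate the fraction to nanoseconds
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Literal value from the log; in the real flow this string comes fresh from the guest.
	guest, err := parseGuestClock("1721929837.991587228")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	const tolerance = 2 * time.Second // illustrative threshold only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta > -tolerance && delta < tolerance)
}
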
	I0725 10:50:38.946300    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:38.946427    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetIP
	I0725 10:50:38.970830    3774 out.go:177] * Found network options:
	I0725 10:50:38.991565    3774 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0725 10:50:39.012683    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.012705    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.012716    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:50:39.012729    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:39.013273    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:39.013419    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:39.013516    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:50:39.013540    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	W0725 10:50:39.013573    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.013595    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.013611    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:50:39.013662    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:39.013696    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 10:50:39.013711    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:39.013838    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:39.013857    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:39.014008    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:39.014022    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:39.014127    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:39.014144    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:39.014253    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	W0725 10:50:39.045644    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:50:39.045702    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:50:39.092083    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
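
Disabling the bridge/podman CNI configs is just a rename to <name>.mk_disabled so they can be restored later. Roughly what the find/mv above does, in Go:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", src)
		}
	}
}
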
	I0725 10:50:39.092097    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:50:39.092166    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:50:39.107318    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:50:39.116414    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:50:39.125304    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:50:39.125351    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:50:39.134448    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:50:39.143660    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:50:39.152627    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:50:39.161628    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:50:39.170919    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:50:39.179944    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:50:39.189118    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:50:39.198245    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:50:39.206354    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:50:39.214527    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:39.316588    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
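
Of the sed edits above, the one that actually selects the cgroup driver is forcing SystemdCgroup = false in /etc/containerd/config.toml; the others normalize runtime names and the CNI conf_dir. A reduced Go sketch of that edit plus the containerd restart:

package main

import (
	"os"
	"os/exec"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
	for _, args := range [][]string{{"systemctl", "daemon-reload"}, {"systemctl", "restart", "containerd"}} {
		if err := exec.Command("sudo", args...).Run(); err != nil {
			panic(err)
		}
	}
}
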
	I0725 10:50:39.336368    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:50:39.336436    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:50:39.358127    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:50:39.374161    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:50:39.391711    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:50:39.402592    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:50:39.413153    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:50:39.435734    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:50:39.446272    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:50:39.461753    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:50:39.464748    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:50:39.475362    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:50:39.488985    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:50:39.584434    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:50:39.693634    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:50:39.693658    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
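
The log records only that the generated /etc/docker/daemon.json is 130 bytes and selects the cgroupfs driver; the fields below are an illustrative guess at that kind of file, not a copy of what minikube writes:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative guess at a cgroupfs daemon.json; not copied from minikube.
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
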
	I0725 10:50:39.707727    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:39.811725    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:51:40.826240    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.01368475s)
	I0725 10:51:40.826323    3774 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0725 10:51:40.860544    3774 out.go:177] 
	W0725 10:51:40.881235    3774 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 25 17:50:36 ha-485000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.491525254Z" level=info msg="Starting up"
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.491953665Z" level=info msg="containerd not running, starting managed containerd"
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.492498920Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.507900444Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.528985080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529119487Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529190184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529227367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529382906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529449495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529613424Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529662631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529706898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529742376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529932288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.530230915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.531799836Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.531853394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532013423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532060003Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532224150Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532284958Z" level=info msg="metadata content store policy set" policy=shared
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533667564Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533767716Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533813069Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533911547Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533958502Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534060695Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534298445Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534406555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534446041Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534478140Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534510008Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534540807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534571096Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534602037Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534636987Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534677965Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534711495Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534740942Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534776402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534811869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534850836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534886582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534922068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534956302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534986255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535016075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535048332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535088208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535121326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535150501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535236221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535275301Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535313456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535345038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535373791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535440294Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535482913Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535628751Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535672410Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535703274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535734362Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535762669Z" level=info msg="NRI interface is disabled by configuration."
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535960514Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536080093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536142938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536179093Z" level=info msg="containerd successfully booted in 0.029080s"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.510927923Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.523832948Z" level=info msg="Loading containers: start."
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.618418659Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.680635969Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.724258639Z" level=info msg="Loading containers: done."
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.734502052Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.734720064Z" level=info msg="Daemon has completed initialization"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.758872412Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.759079256Z" level=info msg="API listen on [::]:2376"
	Jul 25 17:50:37 ha-485000-m04 systemd[1]: Started Docker Application Container Engine.
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.869455528Z" level=info msg="Processing signal 'terminated'"
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870413697Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870828965Z" level=info msg="Daemon shutdown complete"
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870895825Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870897244Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 25 17:50:38 ha-485000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: docker.service: Deactivated successfully.
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 17:50:39 ha-485000-m04 dockerd[1172]: time="2024-07-25T17:50:39.901687066Z" level=info msg="Starting up"
	Jul 25 17:51:40 ha-485000-m04 dockerd[1172]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
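
The failure in the dump above is dockerd timing out while dialing /run/containerd/containerd.sock on its second start, i.e. the system containerd never answered inside docker's start window. A quick hand check for that socket, runnable on the guest (hypothetical helper, not part of the test output):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// dockerd above gave up dialing this socket after its 60s start window.
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 3*time.Second)
	if err != nil {
		fmt.Println("containerd socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket is reachable; dockerd should be able to start")
}
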
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 25 17:50:36 ha-485000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.491525254Z" level=info msg="Starting up"
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.491953665Z" level=info msg="containerd not running, starting managed containerd"
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.492498920Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.507900444Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.528985080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529119487Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529190184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529227367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529382906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529449495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529613424Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529662631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529706898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529742376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529932288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.530230915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.531799836Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.531853394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532013423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532060003Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532224150Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532284958Z" level=info msg="metadata content store policy set" policy=shared
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533667564Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533767716Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533813069Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533911547Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533958502Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534060695Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534298445Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534406555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534446041Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534478140Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534510008Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534540807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534571096Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534602037Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534636987Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534677965Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534711495Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534740942Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534776402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534811869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534850836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534886582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534922068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534956302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534986255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535016075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535048332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535088208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535121326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535150501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535236221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535275301Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535313456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535345038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535373791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535440294Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535482913Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535628751Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535672410Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535703274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535734362Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535762669Z" level=info msg="NRI interface is disabled by configuration."
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535960514Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536080093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536142938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536179093Z" level=info msg="containerd successfully booted in 0.029080s"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.510927923Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.523832948Z" level=info msg="Loading containers: start."
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.618418659Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.680635969Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.724258639Z" level=info msg="Loading containers: done."
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.734502052Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.734720064Z" level=info msg="Daemon has completed initialization"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.758872412Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.759079256Z" level=info msg="API listen on [::]:2376"
	Jul 25 17:50:37 ha-485000-m04 systemd[1]: Started Docker Application Container Engine.
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.869455528Z" level=info msg="Processing signal 'terminated'"
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870413697Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870828965Z" level=info msg="Daemon shutdown complete"
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870895825Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870897244Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 25 17:50:38 ha-485000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: docker.service: Deactivated successfully.
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 17:50:39 ha-485000-m04 dockerd[1172]: time="2024-07-25T17:50:39.901687066Z" level=info msg="Starting up"
	Jul 25 17:51:40 ha-485000-m04 dockerd[1172]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0725 10:51:40.881304    3774 out.go:239] * 
	* 
	W0725 10:51:40.882170    3774 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 10:51:40.944200    3774 out.go:177] 

                                                
                                                
** /stderr **
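
The journal excerpt captured above pinpoints the underlying failure: after the restart, dockerd on ha-485000-m04 could not reach containerd ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"), so systemd marked docker.service as failed and the node never came back up. As a minimal sketch (not part of the test suite), the Go probe below dials a containerd-style unix socket with a deadline, the same kind of check that timed out here; the socket path and the 10-second timeout are illustrative assumptions, not values read from dockerd's configuration.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// probeContainerd dials a containerd-style unix socket and reports whether it
// answered before the deadline, mirroring the dial that dockerd gave up on.
func probeContainerd(sock string, timeout time.Duration) error {
	conn, err := net.DialTimeout("unix", sock, timeout)
	if err != nil {
		return fmt.Errorf("containerd socket %s not reachable: %w", sock, err)
	}
	return conn.Close()
}

func main() {
	// Assumed path and deadline, for illustration only.
	if err := probeContainerd("/run/containerd/containerd.sock", 10*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is accepting connections")
}

Run inside the guest, a probe like this would separate a containerd that never came up from a Docker-side misconfiguration.
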
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-485000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-485000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-485000 -n ha-485000
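
The --format={{.Host}} argument in the post-mortem status call above is a Go text/template that the status command renders against its status data. A small self-contained sketch of how such a template is evaluated follows; the Status struct and its field values are placeholders for illustration, not minikube's actual types.

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in struct for illustration; the real status structure lives in minikube.
type Status struct {
	Host    string
	Kubelet string
}

func main() {
	// Parse the same kind of template string that would be passed via --format.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// Executing it prints only the Host field, e.g. "Stopped".
	if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"}); err != nil {
		panic(err)
	}
}

Because only the Host field is emitted, the helper can use the command's output directly as the host state.
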
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-485000 logs -n 25: (3.327459505s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-485000 cp ha-485000-m03:/home/docker/cp-test.txt                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m02:/home/docker/cp-test_ha-485000-m03_ha-485000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n ha-485000-m02 sudo cat                                                                                      | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /home/docker/cp-test_ha-485000-m03_ha-485000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-485000 cp ha-485000-m03:/home/docker/cp-test.txt                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04:/home/docker/cp-test_ha-485000-m03_ha-485000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n ha-485000-m04 sudo cat                                                                                      | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /home/docker/cp-test_ha-485000-m03_ha-485000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-485000 cp testdata/cp-test.txt                                                                                            | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1573436771/001/cp-test_ha-485000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000:/home/docker/cp-test_ha-485000-m04_ha-485000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n ha-485000 sudo cat                                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /home/docker/cp-test_ha-485000-m04_ha-485000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m02:/home/docker/cp-test_ha-485000-m04_ha-485000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n ha-485000-m02 sudo cat                                                                                      | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /home/docker/cp-test_ha-485000-m04_ha-485000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m03:/home/docker/cp-test_ha-485000-m04_ha-485000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n ha-485000-m03 sudo cat                                                                                      | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /home/docker/cp-test_ha-485000-m04_ha-485000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-485000 node stop m02 -v=7                                                                                                 | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-485000 node start m02 -v=7                                                                                                | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-485000 -v=7                                                                                                       | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-485000 -v=7                                                                                                            | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:48 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-485000 --wait=true -v=7                                                                                                | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:48 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-485000                                                                                                            | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:51 PDT |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 10:48:26
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 10:48:26.008505    3774 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:48:26.008703    3774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:48:26.008708    3774 out.go:304] Setting ErrFile to fd 2...
	I0725 10:48:26.008712    3774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:48:26.008889    3774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 10:48:26.010330    3774 out.go:298] Setting JSON to false
	I0725 10:48:26.034230    3774 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2876,"bootTime":1721926830,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 10:48:26.034337    3774 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:48:26.057780    3774 out.go:177] * [ha-485000] minikube v1.33.1 on Darwin 14.5
	I0725 10:48:26.099403    3774 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 10:48:26.099443    3774 notify.go:220] Checking for updates...
	I0725 10:48:26.142252    3774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:26.163519    3774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 10:48:26.184535    3774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:48:26.205465    3774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 10:48:26.226618    3774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 10:48:26.248320    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:26.248484    3774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:48:26.249112    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:26.249222    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:48:26.258893    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51885
	I0725 10:48:26.259439    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:48:26.260047    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:48:26.260058    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:48:26.260427    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:48:26.260644    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:26.289300    3774 out.go:177] * Using the hyperkit driver based on existing profile
	I0725 10:48:26.331665    3774 start.go:297] selected driver: hyperkit
	I0725 10:48:26.331692    3774 start.go:901] validating driver "hyperkit" against &{Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:48:26.331911    3774 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 10:48:26.332099    3774 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:48:26.332295    3774 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 10:48:26.342212    3774 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 10:48:26.348291    3774 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:26.348316    3774 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 10:48:26.351632    3774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 10:48:26.351670    3774 cni.go:84] Creating CNI manager for ""
	I0725 10:48:26.351677    3774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 10:48:26.351755    3774 start.go:340] cluster config:
	{Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:48:26.351859    3774 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:48:26.394511    3774 out.go:177] * Starting "ha-485000" primary control-plane node in "ha-485000" cluster
	I0725 10:48:26.415566    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:48:26.415642    3774 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0725 10:48:26.415668    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:48:26.415915    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:48:26.415934    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:48:26.416129    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:26.417069    3774 start.go:360] acquireMachinesLock for ha-485000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:48:26.417183    3774 start.go:364] duration metric: took 90.924µs to acquireMachinesLock for "ha-485000"
	I0725 10:48:26.417209    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:48:26.417242    3774 fix.go:54] fixHost starting: 
	I0725 10:48:26.417573    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:26.417601    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:48:26.426437    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51887
	I0725 10:48:26.426806    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:48:26.427140    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:48:26.427151    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:48:26.427362    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:48:26.427487    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:26.427621    3774 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:48:26.427738    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:26.427816    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3271
	I0725 10:48:26.428722    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid 3271 missing from process table
	I0725 10:48:26.428789    3774 fix.go:112] recreateIfNeeded on ha-485000: state=Stopped err=<nil>
	I0725 10:48:26.428816    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	W0725 10:48:26.428913    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:48:26.450263    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000" ...
	I0725 10:48:26.492478    3774 main.go:141] libmachine: (ha-485000) Calling .Start
	I0725 10:48:26.492777    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:26.492834    3774 main.go:141] libmachine: (ha-485000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid
	I0725 10:48:26.494964    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid 3271 missing from process table
	I0725 10:48:26.494992    3774 main.go:141] libmachine: (ha-485000) DBG | pid 3271 is in state "Stopped"
	I0725 10:48:26.495011    3774 main.go:141] libmachine: (ha-485000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid...
	I0725 10:48:26.495351    3774 main.go:141] libmachine: (ha-485000) DBG | Using UUID 6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3
	I0725 10:48:26.602890    3774 main.go:141] libmachine: (ha-485000) DBG | Generated MAC 52:76:82:a1:51:13
	I0725 10:48:26.602911    3774 main.go:141] libmachine: (ha-485000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:48:26.603041    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003947b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:26.603067    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003947b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:26.603128    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/ha-485000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:48:26.603166    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/ha-485000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:48:26.603183    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:48:26.604450    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Pid is 3787
	I0725 10:48:26.604799    3774 main.go:141] libmachine: (ha-485000) DBG | Attempt 0
	I0725 10:48:26.604824    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:26.604870    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3787
	I0725 10:48:26.606553    3774 main.go:141] libmachine: (ha-485000) DBG | Searching for 52:76:82:a1:51:13 in /var/db/dhcpd_leases ...
	I0725 10:48:26.606607    3774 main.go:141] libmachine: (ha-485000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:48:26.606642    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:48:26.606660    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e130}
	I0725 10:48:26.606672    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a3e09c}
	I0725 10:48:26.606684    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e019}
	I0725 10:48:26.606696    3774 main.go:141] libmachine: (ha-485000) DBG | Found match: 52:76:82:a1:51:13
	I0725 10:48:26.606707    3774 main.go:141] libmachine: (ha-485000) DBG | IP: 192.169.0.5
	I0725 10:48:26.606731    3774 main.go:141] libmachine: (ha-485000) Calling .GetConfigRaw
	I0725 10:48:26.607371    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:26.607542    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:26.608260    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:48:26.608270    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:26.608385    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:26.608483    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:26.608567    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:26.608654    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:26.608755    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:26.608878    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:26.609107    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:26.609118    3774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 10:48:26.612320    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:48:26.665658    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:48:26.666425    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:26.666446    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:26.666486    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:26.666502    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:27.049138    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:48:27.049167    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:48:27.163716    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:27.163734    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:27.163745    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:27.163771    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:27.164666    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:48:27.164679    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:48:32.750889    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:48:32.750945    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:48:32.750956    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:48:32.776771    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:48:37.667735    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:48:37.667749    3774 main.go:141] libmachine: (ha-485000) Calling .GetMachineName
	I0725 10:48:37.667914    3774 buildroot.go:166] provisioning hostname "ha-485000"
	I0725 10:48:37.667925    3774 main.go:141] libmachine: (ha-485000) Calling .GetMachineName
	I0725 10:48:37.668027    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.668112    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.668192    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.668288    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.668362    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.668500    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.668656    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.668664    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000 && echo "ha-485000" | sudo tee /etc/hostname
	I0725 10:48:37.727283    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000
	
	I0725 10:48:37.727299    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.727438    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.727532    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.727625    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.727717    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.727855    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.727982    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.727993    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:48:37.785056    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 10:48:37.785076    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:48:37.785094    3774 buildroot.go:174] setting up certificates
	I0725 10:48:37.785101    3774 provision.go:84] configureAuth start
	I0725 10:48:37.785108    3774 main.go:141] libmachine: (ha-485000) Calling .GetMachineName
	I0725 10:48:37.785245    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:37.785333    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.785431    3774 provision.go:143] copyHostCerts
	I0725 10:48:37.785463    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:37.785523    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:48:37.785532    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:37.785675    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:48:37.785906    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:37.785936    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:48:37.785941    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:37.786010    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:48:37.786157    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:37.786185    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:48:37.786190    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:37.786257    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:48:37.786415    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000 san=[127.0.0.1 192.169.0.5 ha-485000 localhost minikube]
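
provision.go logs the SANs it will put into the server certificate: loopback, the VM IP, and the machine/cluster names. Below is a self-contained sketch of issuing such a certificate with crypto/x509; the freshly generated CA stands in for the ca.pem/ca-key.pem pair referenced above, so this is an approximation of the flow, not libmachine's implementation. The three-year validity roughly matches the CertExpiration:26280h0m0s value carried in the cluster config.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Hypothetical stand-in for the CA that libmachine loads from ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.ha-485000"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-485000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-485000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
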
	I0725 10:48:37.823550    3774 provision.go:177] copyRemoteCerts
	I0725 10:48:37.823600    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:48:37.823615    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.823730    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.823832    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.823929    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.824016    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:37.858513    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:48:37.858593    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0725 10:48:37.877705    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:48:37.877768    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 10:48:37.897239    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:48:37.897295    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:48:37.916551    3774 provision.go:87] duration metric: took 131.43608ms to configureAuth
	I0725 10:48:37.916563    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:48:37.916723    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:37.916742    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:37.916888    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.916985    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.917074    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.917167    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.917240    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.917351    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.917476    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.917483    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:48:37.966249    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:48:37.966266    3774 buildroot.go:70] root file system type: tmpfs
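
Before rewriting the docker unit, the provisioner probes the guest's root filesystem type; here it comes back as tmpfs, i.e. the buildroot guest boots from a volatile root. A hedged local equivalent of the probe, using os/exec in place of the SSH runner (note that the --output flag is GNU df only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe as the logged SSH command: df --output=fstype / | tail -n 1
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		panic(err)
	}
	fstype := strings.TrimSpace(string(out))
	fmt.Println("root file system type:", fstype)
}
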
	I0725 10:48:37.966344    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:48:37.966356    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.966476    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.966563    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.966659    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.966744    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.966879    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.967017    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.967059    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:48:38.025932    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 10:48:38.025951    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:38.026084    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:38.026186    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:38.026311    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:38.026409    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:38.026537    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:38.026681    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:38.026694    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:48:39.678604    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 10:48:39.678619    3774 machine.go:97] duration metric: took 13.070176391s to provisionDockerMachine
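
The unit file is first written to docker.service.new and only swapped in (followed by daemon-reload, enable and restart) when it differs from what is already installed; on this freshly provisioned VM the diff fails because no old unit exists, so the swap runs unconditionally and systemd creates the multi-user.target.wants symlink. A sketch of that compare-then-swap pattern; it needs root and a systemd host, and the function name installUnit is made up for the example:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnit replaces path with newPath and reloads/restarts the service
// only when the rendered unit actually changed. Sketch of the logged
// "diff || { mv; daemon-reload; enable; restart; }" sequence.
func installUnit(path, newPath, service string) error {
	oldData, err := os.ReadFile(path) // a missing file counts as "changed"
	newData, readErr := os.ReadFile(newPath)
	if readErr != nil {
		return readErr
	}
	if err == nil && bytes.Equal(oldData, newData) {
		return os.Remove(newPath) // nothing to do
	}
	if err := os.Rename(newPath, path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", service}, {"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := installUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker"); err != nil {
		fmt.Println("install failed:", err)
	}
}
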
	I0725 10:48:39.678630    3774 start.go:293] postStartSetup for "ha-485000" (driver="hyperkit")
	I0725 10:48:39.678637    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:48:39.678650    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.678827    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:48:39.678844    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.678949    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.679038    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.679143    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.679235    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.716368    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:48:39.720567    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:48:39.720581    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:48:39.720675    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:48:39.720817    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:48:39.720823    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:48:39.720982    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:48:39.729186    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:48:39.758308    3774 start.go:296] duration metric: took 79.656539ms for postStartSetup
	I0725 10:48:39.758333    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.758515    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:48:39.758527    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.758630    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.758718    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.758821    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.758909    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.790766    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:48:39.790818    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:48:39.841417    3774 fix.go:56] duration metric: took 13.423999639s for fixHost
	I0725 10:48:39.841437    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.841579    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.841669    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.841753    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.841830    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.841969    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:39.842111    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:39.842118    3774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 10:48:39.893711    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929719.869493557
	
	I0725 10:48:39.893723    3774 fix.go:216] guest clock: 1721929719.869493557
	I0725 10:48:39.893729    3774 fix.go:229] Guest: 2024-07-25 10:48:39.869493557 -0700 PDT Remote: 2024-07-25 10:48:39.841427 -0700 PDT m=+13.869378775 (delta=28.066557ms)
	I0725 10:48:39.893749    3774 fix.go:200] guest clock delta is within tolerance: 28.066557ms
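
fix.go reads the guest clock via date +%s.%N and compares it with the host clock; the 28ms delta here is well within tolerance, so no resynchronization is forced. A sketch of the parse-and-compare step (the 2s tolerance constant is an assumption for illustration, not minikube's actual threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1721929719.869493557" (seconds.nanoseconds) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721929719.869493557")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, not minikube's actual constant
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
}
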
	I0725 10:48:39.893753    3774 start.go:83] releasing machines lock for "ha-485000", held for 13.476381445s
	I0725 10:48:39.893772    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.893900    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:39.894007    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.894332    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.894445    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.894524    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:48:39.894561    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.894577    3774 ssh_runner.go:195] Run: cat /version.json
	I0725 10:48:39.894588    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.894652    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.894680    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.894746    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.894757    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.894831    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.894854    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.894930    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.894951    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.969105    3774 ssh_runner.go:195] Run: systemctl --version
	I0725 10:48:39.974344    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 10:48:39.978550    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:48:39.978588    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:48:39.992374    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 10:48:39.992386    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:39.992494    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:40.010041    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:48:40.018981    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:48:40.027827    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:48:40.027880    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:48:40.036849    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:40.045802    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:48:40.054565    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:40.063403    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:48:40.072492    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:48:40.081289    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:48:40.089964    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:48:40.098883    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:48:40.106915    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:48:40.114912    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:40.213735    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
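
The run of sed commands above rewrites /etc/containerd/config.toml so containerd uses cgroupfs, the runc v2 shim and the expected CNI conf dir, then containerd is restarted. One of those edits, forcing SystemdCgroup = false, reproduced as a Go regexp rewrite over an in-memory copy of the file (a sketch; the real flow keeps shelling out to sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := []byte(`[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`)
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(config, []byte("${1}SystemdCgroup = false"))
	fmt.Print(string(out))
}
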
	I0725 10:48:40.232850    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:40.232927    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:48:40.247072    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:40.260328    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:48:40.277505    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:40.288634    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:40.302282    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:48:40.323941    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:40.334346    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:40.349841    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:48:40.352851    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:48:40.359956    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:48:40.373249    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:48:40.468940    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:48:40.562165    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:48:40.562232    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 10:48:40.576420    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:40.666619    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:48:42.973495    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.306818339s)
	I0725 10:48:42.973567    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 10:48:42.984136    3774 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0725 10:48:42.997023    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:48:43.007459    3774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 10:48:43.101460    3774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 10:48:43.206558    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:43.318643    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 10:48:43.332235    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:48:43.343300    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:43.439386    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 10:48:43.504079    3774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 10:48:43.504167    3774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 10:48:43.509100    3774 start.go:563] Will wait 60s for crictl version
	I0725 10:48:43.509160    3774 ssh_runner.go:195] Run: which crictl
	I0725 10:48:43.514298    3774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 10:48:43.540285    3774 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
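
Once cri-docker.service is restarted, the start-up code waits up to 60s for /var/run/cri-dockerd.sock to appear and then asks crictl for the runtime version, which reports docker 27.1.0 speaking CRI v1. A hedged sketch of the socket wait, polling stat against a deadline (the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}
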
	I0725 10:48:43.540359    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:48:43.556856    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:48:43.619142    3774 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0725 10:48:43.619193    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:43.619596    3774 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0725 10:48:43.624261    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:48:43.634147    3774 kubeadm.go:883] updating cluster {Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false f
reshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 10:48:43.634230    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:48:43.634284    3774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 10:48:43.648086    3774 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0725 10:48:43.648098    3774 docker.go:615] Images already preloaded, skipping extraction
	I0725 10:48:43.648178    3774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 10:48:43.661887    3774 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0725 10:48:43.661905    3774 cache_images.go:84] Images are preloaded, skipping loading
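
Both docker images listings return the full preloaded set, so tarball extraction is skipped. A sketch of that check, listing images locally and verifying a handful of the expected entries taken from the output above (os/exec stands in for the SSH runner, and the expected list is truncated for brevity):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	missing := 0
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing:", img)
			missing++
		}
	}
	fmt.Printf("preloaded: %v\n", missing == 0)
}
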
	I0725 10:48:43.661914    3774 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.3 docker true true} ...
	I0725 10:48:43.661994    3774 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-485000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 10:48:43.662065    3774 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 10:48:43.699921    3774 cni.go:84] Creating CNI manager for ""
	I0725 10:48:43.699936    3774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 10:48:43.699949    3774 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 10:48:43.699966    3774 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-485000 NodeName:ha-485000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 10:48:43.700056    3774 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-485000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
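
	The kubeadm configuration above is rendered from the option struct logged at kubeadm.go:181 (advertise address, pod and service CIDRs, CRI socket, cgroup driver, and so on). Below is a trimmed text/template sketch of that rendering step; the kubeadmOpts struct and the template fragment cover only the ClusterConfiguration networking block and are illustrative, not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the kubeadm options; field names are assumptions.
type kubeadmOpts struct {
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
	ControlPlane      string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlane}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		KubernetesVersion: "v1.30.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		ControlPlane:      "control-plane.minikube.internal:8443",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
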
	
	I0725 10:48:43.700077    3774 kube-vip.go:115] generating kube-vip config ...
	I0725 10:48:43.700127    3774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 10:48:43.712809    3774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 10:48:43.712873    3774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
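
Whether the lb_enable/lb_port environment variables appear in the kube-vip manifest above depends on the modprobe check logged just before it: if the IPVS modules load, control-plane load-balancing is auto-enabled. A sketch of that decision, with exec.Command standing in for the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the log: modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	lbEnable := err == nil
	if lbEnable {
		fmt.Println("auto-enabling control-plane load-balancing in kube-vip (lb_enable=true)")
	} else {
		fmt.Println("IPVS modules unavailable, leaving load-balancing off:", err)
	}
}
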
	I0725 10:48:43.712925    3774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 10:48:43.721182    3774 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 10:48:43.721226    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0725 10:48:43.728575    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0725 10:48:43.742374    3774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 10:48:43.755567    3774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0725 10:48:43.769800    3774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0725 10:48:43.783433    3774 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0725 10:48:43.786504    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:48:43.795954    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:43.896290    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:48:43.910403    3774 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000 for IP: 192.169.0.5
	I0725 10:48:43.910418    3774 certs.go:194] generating shared ca certs ...
	I0725 10:48:43.910428    3774 certs.go:226] acquiring lock for ca certs: {Name:mk9c2ae9585d6ecb7080ac84ff2a6c71f713722e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:43.910590    3774 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key
	I0725 10:48:43.910647    3774 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key
	I0725 10:48:43.910658    3774 certs.go:256] generating profile certs ...
	I0725 10:48:43.910746    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key
	I0725 10:48:43.910769    3774 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f
	I0725 10:48:43.910786    3774 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0725 10:48:44.010960    3774 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f ...
	I0725 10:48:44.010977    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f: {Name:mka1c7bb5889cefec4fa34bda59b0dccc014b849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.011374    3774 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f ...
	I0725 10:48:44.011384    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f: {Name:mk2a7443f9ec44bdbab1eccd742bb8d7bd46104e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.011591    3774 certs.go:381] copying /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f -> /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt
	I0725 10:48:44.011796    3774 certs.go:385] copying /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f -> /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key
	I0725 10:48:44.012023    3774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key
	I0725 10:48:44.012033    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 10:48:44.012056    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 10:48:44.012075    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 10:48:44.012095    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 10:48:44.012113    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 10:48:44.012132    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 10:48:44.012152    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 10:48:44.012170    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 10:48:44.012249    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem (1338 bytes)
	W0725 10:48:44.012300    3774 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732_empty.pem, impossibly tiny 0 bytes
	I0725 10:48:44.012308    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 10:48:44.012345    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem (1078 bytes)
	I0725 10:48:44.012379    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem (1123 bytes)
	I0725 10:48:44.012417    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem (1679 bytes)
	I0725 10:48:44.012485    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:48:44.012517    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.012537    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem -> /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.012555    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.013001    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 10:48:44.040701    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 10:48:44.077388    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 10:48:44.112787    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 10:48:44.159876    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 10:48:44.190098    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 10:48:44.210542    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 10:48:44.230450    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 10:48:44.251339    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 10:48:44.271102    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem --> /usr/share/ca-certificates/1732.pem (1338 bytes)
	I0725 10:48:44.290804    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /usr/share/ca-certificates/17322.pem (1708 bytes)
	I0725 10:48:44.310754    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 10:48:44.324090    3774 ssh_runner.go:195] Run: openssl version
	I0725 10:48:44.328453    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 10:48:44.336951    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.340313    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.340348    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.344548    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 10:48:44.352831    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1732.pem && ln -fs /usr/share/ca-certificates/1732.pem /etc/ssl/certs/1732.pem"
	I0725 10:48:44.360980    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.364473    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:38 /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.364507    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.368814    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1732.pem /etc/ssl/certs/51391683.0"
	I0725 10:48:44.377043    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17322.pem && ln -fs /usr/share/ca-certificates/17322.pem /etc/ssl/certs/17322.pem"
	I0725 10:48:44.385344    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.388809    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:38 /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.388844    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.393238    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17322.pem /etc/ssl/certs/3ec20f2e.0"
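
Each CA bundle is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 above), which is what lets OpenSSL's hash-based lookup find it. A sketch of computing the hash and creating the link; it shells out to openssl and needs write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash mimics: openssl x509 -hash -noout -in <pem>; ln -fs <pem> /etc/ssl/certs/<hash>.0
func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // -f behaviour: replace an existing link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}
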
	I0725 10:48:44.401504    3774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 10:48:44.404983    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 10:48:44.409808    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 10:48:44.414092    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 10:48:44.418841    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 10:48:44.423109    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 10:48:44.427402    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
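
The series of openssl x509 -checkend 86400 calls verifies that none of the control-plane certificates expires within the next 24 hours; a failing check would trigger regeneration. The same test done natively with crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires
// before now+window -- the native equivalent of `openssl x509 -checkend`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
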
	I0725 10:48:44.432185    3774 kubeadm.go:392] StartCluster: {Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:48:44.432302    3774 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 10:48:44.448953    3774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 10:48:44.456669    3774 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 10:48:44.456681    3774 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 10:48:44.456727    3774 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 10:48:44.465243    3774 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 10:48:44.465557    3774 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-485000" does not appear in /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:44.465649    3774 kubeconfig.go:62] /Users/jenkins/minikube-integration/19326-1195/kubeconfig needs updating (will repair): [kubeconfig missing "ha-485000" cluster setting kubeconfig missing "ha-485000" context setting]
	I0725 10:48:44.465837    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/kubeconfig: {Name:mkc008ed365c6765ce04b67847a4585ec214b70b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.466249    3774 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:44.466441    3774 kapi.go:59] client config for ha-485000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe203400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 10:48:44.466804    3774 cert_rotation.go:137] Starting client certificate rotation controller
	I0725 10:48:44.466963    3774 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 10:48:44.474340    3774 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0725 10:48:44.474351    3774 kubeadm.go:597] duration metric: took 17.665834ms to restartPrimaryControlPlane
	I0725 10:48:44.474370    3774 kubeadm.go:394] duration metric: took 42.188275ms to StartCluster
	I0725 10:48:44.474382    3774 settings.go:142] acquiring lock: {Name:mk4f7e43bf5353228d4c27f1f08450065f65cd00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.474454    3774 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:44.474852    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/kubeconfig: {Name:mkc008ed365c6765ce04b67847a4585ec214b70b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.475072    3774 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 10:48:44.475085    3774 start.go:241] waiting for startup goroutines ...
	I0725 10:48:44.475106    3774 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 10:48:44.475233    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:44.518490    3774 out.go:177] * Enabled addons: 
	I0725 10:48:44.539102    3774 addons.go:510] duration metric: took 64.005371ms for enable addons: enabled=[]
	I0725 10:48:44.539140    3774 start.go:246] waiting for cluster config update ...
	I0725 10:48:44.539164    3774 start.go:255] writing updated cluster config ...
	I0725 10:48:44.561436    3774 out.go:177] 
	I0725 10:48:44.582967    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:44.583098    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:44.605394    3774 out.go:177] * Starting "ha-485000-m02" control-plane node in "ha-485000" cluster
	I0725 10:48:44.647510    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:48:44.647544    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:48:44.647720    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:48:44.647738    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:48:44.647870    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:44.648902    3774 start.go:360] acquireMachinesLock for ha-485000-m02: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:48:44.649015    3774 start.go:364] duration metric: took 81.917µs to acquireMachinesLock for "ha-485000-m02"
	I0725 10:48:44.649041    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:48:44.649050    3774 fix.go:54] fixHost starting: m02
	I0725 10:48:44.649495    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:44.649529    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:48:44.659031    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51909
	I0725 10:48:44.659557    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:48:44.659989    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:48:44.660004    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:48:44.660364    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:48:44.660504    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:44.660702    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetState
	I0725 10:48:44.660973    3774 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:44.661074    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid from json: 3731
	I0725 10:48:44.661956    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid 3731 missing from process table
	I0725 10:48:44.662000    3774 fix.go:112] recreateIfNeeded on ha-485000-m02: state=Stopped err=<nil>
	I0725 10:48:44.662009    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	W0725 10:48:44.662092    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:48:44.704559    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000-m02" ...
	I0725 10:48:44.726283    3774 main.go:141] libmachine: (ha-485000-m02) Calling .Start
	I0725 10:48:44.726558    3774 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:44.726661    3774 main.go:141] libmachine: (ha-485000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid
	I0725 10:48:44.728373    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid 3731 missing from process table
	I0725 10:48:44.728390    3774 main.go:141] libmachine: (ha-485000-m02) DBG | pid 3731 is in state "Stopped"
	I0725 10:48:44.728407    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid...
	I0725 10:48:44.728847    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Using UUID 528f0647-a045-4ab7-922b-886237fb4fc4
	I0725 10:48:44.756033    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Generated MAC c2:64:80:a8:d2:48
	I0725 10:48:44.756067    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:48:44.756191    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"528f0647-a045-4ab7-922b-886237fb4fc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003968d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:44.756227    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"528f0647-a045-4ab7-922b-886237fb4fc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003968d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:44.756275    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "528f0647-a045-4ab7-922b-886237fb4fc4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/ha-485000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:48:44.756319    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 528f0647-a045-4ab7-922b-886237fb4fc4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/ha-485000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:48:44.756334    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:48:44.757674    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Pid is 3792
	I0725 10:48:44.758132    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Attempt 0
	I0725 10:48:44.758146    3774 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:44.758210    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid from json: 3792
	I0725 10:48:44.759852    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Searching for c2:64:80:a8:d2:48 in /var/db/dhcpd_leases ...
	I0725 10:48:44.759913    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:48:44.759930    3774 main.go:141] libmachine: (ha-485000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e173}
	I0725 10:48:44.759945    3774 main.go:141] libmachine: (ha-485000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:48:44.759953    3774 main.go:141] libmachine: (ha-485000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e130}
	I0725 10:48:44.759960    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Found match: c2:64:80:a8:d2:48
	I0725 10:48:44.759970    3774 main.go:141] libmachine: (ha-485000-m02) DBG | IP: 192.169.0.6
	I0725 10:48:44.759997    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetConfigRaw
	I0725 10:48:44.760701    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:48:44.760893    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:44.761371    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:48:44.761383    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:44.761484    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:44.761567    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:44.761671    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:44.761791    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:44.761906    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:44.762039    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:44.762188    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:44.762196    3774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 10:48:44.765251    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:48:44.773188    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:48:44.774148    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:44.774173    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:44.774196    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:44.774224    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:45.156825    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:48:45.156838    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:48:45.271856    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:45.271872    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:45.271881    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:45.271892    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:45.272766    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:48:45.272776    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:48:50.885003    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:48:50.885077    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:48:50.885089    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:48:50.908756    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:48:55.821181    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:48:55.821195    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetMachineName
	I0725 10:48:55.821334    3774 buildroot.go:166] provisioning hostname "ha-485000-m02"
	I0725 10:48:55.821345    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetMachineName
	I0725 10:48:55.821436    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:55.821525    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:55.821602    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.821685    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.821770    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:55.821916    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:55.822063    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:55.822074    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000-m02 && echo "ha-485000-m02" | sudo tee /etc/hostname
	I0725 10:48:55.882249    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000-m02
	
	I0725 10:48:55.882268    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:55.882410    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:55.882498    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.882588    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.882688    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:55.882825    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:55.883013    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:55.883027    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:48:55.939117    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 10:48:55.939132    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:48:55.939142    3774 buildroot.go:174] setting up certificates
	I0725 10:48:55.939148    3774 provision.go:84] configureAuth start
	I0725 10:48:55.939154    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetMachineName
	I0725 10:48:55.939283    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:48:55.939381    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:55.939461    3774 provision.go:143] copyHostCerts
	I0725 10:48:55.939491    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:55.939543    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:48:55.939549    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:55.939688    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:48:55.939893    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:55.939923    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:48:55.939928    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:55.940045    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:48:55.940199    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:55.940230    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:48:55.940235    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:55.940305    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:48:55.940447    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000-m02 san=[127.0.0.1 192.169.0.6 ha-485000-m02 localhost minikube]
	I0725 10:48:56.088970    3774 provision.go:177] copyRemoteCerts
	I0725 10:48:56.089020    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:48:56.089034    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.089186    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.089282    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.089402    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.089501    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:56.122259    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:48:56.122325    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:48:56.141398    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:48:56.141472    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 10:48:56.160336    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:48:56.160401    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 10:48:56.179351    3774 provision.go:87] duration metric: took 240.193399ms to configureAuth
	I0725 10:48:56.179364    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:48:56.179528    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:56.179541    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:56.179672    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.179753    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.179827    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.179907    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.179983    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.180095    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:56.180218    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:56.180226    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:48:56.231701    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:48:56.231712    3774 buildroot.go:70] root file system type: tmpfs
	I0725 10:48:56.231785    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:48:56.231798    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.231926    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.232020    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.232113    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.232213    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.232352    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:56.232487    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:56.232547    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:48:56.292824    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 10:48:56.292843    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.292983    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.293079    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.293173    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.293276    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.293398    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:56.293536    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:56.293548    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:48:57.936294    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 10:48:57.936308    3774 machine.go:97] duration metric: took 13.174752883s to provisionDockerMachine
	I0725 10:48:57.936315    3774 start.go:293] postStartSetup for "ha-485000-m02" (driver="hyperkit")
	I0725 10:48:57.936322    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:48:57.936333    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:57.936508    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:48:57.936520    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:57.936625    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:57.936725    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:57.936811    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:57.936919    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:57.973264    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:48:57.978182    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:48:57.978195    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:48:57.978293    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:48:57.978433    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:48:57.978439    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:48:57.978595    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:48:57.987699    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:48:58.019591    3774 start.go:296] duration metric: took 83.266386ms for postStartSetup
	I0725 10:48:58.019613    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.019795    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:48:58.019808    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:58.019904    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.019990    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.020087    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.020182    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:58.051727    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:48:58.051783    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:48:58.105518    3774 fix.go:56] duration metric: took 13.45628652s for fixHost
	I0725 10:48:58.105546    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:58.105686    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.105772    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.105857    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.105932    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.106046    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:58.106195    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:58.106205    3774 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 10:48:58.159243    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929738.065939069
	
	I0725 10:48:58.159255    3774 fix.go:216] guest clock: 1721929738.065939069
	I0725 10:48:58.159260    3774 fix.go:229] Guest: 2024-07-25 10:48:58.065939069 -0700 PDT Remote: 2024-07-25 10:48:58.105535 -0700 PDT m=+32.133243684 (delta=-39.595931ms)
	I0725 10:48:58.159284    3774 fix.go:200] guest clock delta is within tolerance: -39.595931ms
	I0725 10:48:58.159289    3774 start.go:83] releasing machines lock for "ha-485000-m02", held for 13.510083839s
	I0725 10:48:58.159306    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.159443    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:48:58.185128    3774 out.go:177] * Found network options:
	I0725 10:48:58.204774    3774 out.go:177]   - NO_PROXY=192.169.0.5
	W0725 10:48:58.225878    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:48:58.225912    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.226598    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.226812    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.226934    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:48:58.226975    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	W0725 10:48:58.227058    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:48:58.227182    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 10:48:58.227214    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:58.227277    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.227471    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.227493    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.227612    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.227649    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.227722    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:58.227752    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.227862    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	W0725 10:48:58.255968    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:48:58.256032    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:48:58.313620    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 10:48:58.313643    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:58.313758    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:58.330047    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:48:58.338245    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:48:58.346310    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:48:58.346349    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:48:58.354315    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:58.362619    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:48:58.370851    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:58.379085    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:48:58.387426    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:48:58.395620    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:48:58.403886    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:48:58.412116    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:48:58.419752    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:48:58.427324    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:58.523289    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0725 10:48:58.542645    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:58.542713    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:48:58.555600    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:58.573132    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:48:58.586107    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:58.596266    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:58.606623    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:48:58.626833    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:58.637094    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:58.651924    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:48:58.654935    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:48:58.662286    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:48:58.675716    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:48:58.765779    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:48:58.866546    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:48:58.866576    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 10:48:58.880570    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:58.988028    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:49:01.326397    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.338319421s)
	I0725 10:49:01.326462    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 10:49:01.336948    3774 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0725 10:49:01.349778    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:49:01.360626    3774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 10:49:01.455101    3774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 10:49:01.569356    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:01.667972    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 10:49:01.681113    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:49:01.691490    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:01.801249    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 10:49:01.864595    3774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 10:49:01.864666    3774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 10:49:01.869013    3774 start.go:563] Will wait 60s for crictl version
	I0725 10:49:01.869064    3774 ssh_runner.go:195] Run: which crictl
	I0725 10:49:01.872470    3774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 10:49:01.897402    3774 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0725 10:49:01.897474    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:49:01.915840    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:49:01.955682    3774 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0725 10:49:01.997327    3774 out.go:177]   - env NO_PROXY=192.169.0.5
	I0725 10:49:02.018327    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:49:02.018733    3774 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0725 10:49:02.023070    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:49:02.032355    3774 mustload.go:65] Loading cluster: ha-485000
	I0725 10:49:02.032524    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:02.032743    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:49:02.032766    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:49:02.041277    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51931
	I0725 10:49:02.041655    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:49:02.041981    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:49:02.041992    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:49:02.042211    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:49:02.042328    3774 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:49:02.042405    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:02.042478    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3787
	I0725 10:49:02.043429    3774 host.go:66] Checking if "ha-485000" exists ...
	I0725 10:49:02.043673    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:49:02.043701    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:49:02.052045    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51933
	I0725 10:49:02.052388    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:49:02.052739    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:49:02.052755    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:49:02.052955    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:49:02.053107    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:49:02.053221    3774 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000 for IP: 192.169.0.6
	I0725 10:49:02.053230    3774 certs.go:194] generating shared ca certs ...
	I0725 10:49:02.053241    3774 certs.go:226] acquiring lock for ca certs: {Name:mk9c2ae9585d6ecb7080ac84ff2a6c71f713722e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:49:02.053422    3774 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key
	I0725 10:49:02.053492    3774 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key
	I0725 10:49:02.053502    3774 certs.go:256] generating profile certs ...
	I0725 10:49:02.053609    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key
	I0725 10:49:02.053685    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.71a9457c
	I0725 10:49:02.053735    3774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key
	I0725 10:49:02.053742    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 10:49:02.053762    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 10:49:02.053782    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 10:49:02.053800    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 10:49:02.053818    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 10:49:02.053836    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 10:49:02.053855    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 10:49:02.053873    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 10:49:02.053951    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem (1338 bytes)
	W0725 10:49:02.054004    3774 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732_empty.pem, impossibly tiny 0 bytes
	I0725 10:49:02.054013    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 10:49:02.054048    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem (1078 bytes)
	I0725 10:49:02.054088    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem (1123 bytes)
	I0725 10:49:02.054118    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem (1679 bytes)
	I0725 10:49:02.054190    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:49:02.054224    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.054248    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem -> /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.054268    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.054296    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:49:02.054399    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:49:02.054491    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:49:02.054572    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:49:02.054658    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:49:02.079258    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0725 10:49:02.082822    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0725 10:49:02.090699    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0725 10:49:02.093745    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0725 10:49:02.101464    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0725 10:49:02.104537    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0725 10:49:02.112169    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0725 10:49:02.115278    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0725 10:49:02.123703    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0725 10:49:02.126716    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0725 10:49:02.134446    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0725 10:49:02.137824    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0725 10:49:02.146212    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 10:49:02.166591    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 10:49:02.186453    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 10:49:02.205945    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 10:49:02.225778    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 10:49:02.245674    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 10:49:02.266075    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 10:49:02.286003    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 10:49:02.305311    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 10:49:02.325216    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem --> /usr/share/ca-certificates/1732.pem (1338 bytes)
	I0725 10:49:02.345019    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /usr/share/ca-certificates/17322.pem (1708 bytes)
	I0725 10:49:02.365056    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0725 10:49:02.378609    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0725 10:49:02.392247    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0725 10:49:02.405745    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0725 10:49:02.419356    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0725 10:49:02.432750    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0725 10:49:02.446244    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0725 10:49:02.459911    3774 ssh_runner.go:195] Run: openssl version
	I0725 10:49:02.464066    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 10:49:02.472406    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.475732    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.475780    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.479985    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 10:49:02.488332    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1732.pem && ln -fs /usr/share/ca-certificates/1732.pem /etc/ssl/certs/1732.pem"
	I0725 10:49:02.496582    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.499979    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:38 /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.500026    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.504179    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1732.pem /etc/ssl/certs/51391683.0"
	I0725 10:49:02.513038    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17322.pem && ln -fs /usr/share/ca-certificates/17322.pem /etc/ssl/certs/17322.pem"
	I0725 10:49:02.521433    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.524771    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:38 /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.524804    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.528889    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17322.pem /etc/ssl/certs/3ec20f2e.0"
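The test/ln/openssl sequences above install each CA under its OpenSSL subject-hash name (for example b5213941.0), which is how OpenSSL locates trusted CAs in /etc/ssl/certs. A hedged sketch of the same idea, shelling out to openssl for the hash and creating the <hash>.0 symlink; the paths are illustrative:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative input
	// openssl prints the subject-name hash used for CA lookup, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of "ln -fs": replace any existing link.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}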
	I0725 10:49:02.537109    3774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 10:49:02.540476    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 10:49:02.544803    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 10:49:02.548989    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 10:49:02.553131    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 10:49:02.557276    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 10:49:02.561375    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
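The -checkend 86400 runs above ask openssl whether each certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status would mean the certificate needs to be regenerated. An equivalent check written in Go, with an illustrative certificate path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Certificate path is illustrative.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same question as "-checkend 86400": is the cert still valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate it")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}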
	I0725 10:49:02.565522    3774 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.3 docker true true} ...
	I0725 10:49:02.565585    3774 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-485000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 10:49:02.565604    3774 kube-vip.go:115] generating kube-vip config ...
	I0725 10:49:02.565637    3774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 10:49:02.578066    3774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 10:49:02.578105    3774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
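The manifest above is the kube-vip static pod that gets written into /etc/kubernetes/manifests: it advertises the VIP 192.169.0.254 over ARP on eth0, uses the plndr-cp-lock lease for leader election, and load-balances the control plane on port 8443. A simplified, hypothetical sketch of the "generating kube-vip config" step, rendering a cut-down manifest with text/template (this is not minikube's real template):

package main

import (
	"log"
	"os"
	"text/template"
)

// Cut-down stand-in for the kube-vip manifest; the real rendered pod (shown
// in the log) also wires up ARP, leader election, and the admin.conf mount.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: cp_enable
      value: "true"
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	data := struct{ VIP, Port string }{VIP: "192.169.0.254", Port: "8443"}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}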
	I0725 10:49:02.578150    3774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 10:49:02.585892    3774 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 10:49:02.585944    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0725 10:49:02.593082    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0725 10:49:02.606549    3774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 10:49:02.620275    3774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0725 10:49:02.633644    3774 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0725 10:49:02.636442    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
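The one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the VIP: it filters out any stale entry, appends the new mapping, and copies the result back. A sketch of the same filter-and-append logic in Go, printing the result instead of overwriting /etc/hosts:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const vip = "192.169.0.254"
	f, err := os.Open("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	var kept []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		// Drop any existing "<ip>\tcontrol-plane.minikube.internal" entry.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	// Append the fresh mapping to the VIP; print instead of copying back.
	kept = append(kept, vip+"\t"+host)
	fmt.Println(strings.Join(kept, "\n"))
}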
	I0725 10:49:02.645688    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:02.737901    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:49:02.753131    3774 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 10:49:02.753310    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:02.774683    3774 out.go:177] * Verifying Kubernetes components...
	I0725 10:49:02.795324    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:02.913425    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:49:02.928242    3774 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:49:02.928448    3774 kapi.go:59] client config for ha-485000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe203400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0725 10:49:02.928483    3774 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
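The client config dumped above authenticates with the profile's client certificate and key against the cluster CA, and the warning notes the stale VIP host being replaced with https://192.169.0.5:8443. A sketch of building that kind of certificate-based client with client-go; the certificate paths are illustrative:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host matches the log's override; paths are illustrative.
	cfg := &rest.Config{
		Host: "https://192.169.0.5:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/ha-485000/client.crt",
			KeyFile:  "/path/to/profiles/ha-485000/client.key",
			CAFile:   "/path/to/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("clientset ready for %s (%T)\n", cfg.Host, clientset)
}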
	I0725 10:49:02.928641    3774 node_ready.go:35] waiting up to 6m0s for node "ha-485000-m02" to be "Ready" ...
	I0725 10:49:02.928720    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:02.928725    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:02.928733    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:02.928736    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.586324    3774 round_trippers.go:574] Response Status: 200 OK in 9657 milliseconds
	I0725 10:49:12.588091    3774 node_ready.go:49] node "ha-485000-m02" has status "Ready":"True"
	I0725 10:49:12.588104    3774 node_ready.go:38] duration metric: took 9.659318554s for node "ha-485000-m02" to be "Ready" ...
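The node_ready wait above polls GET /api/v1/nodes/ha-485000-m02 until the NodeReady condition reports True, with a 6-minute ceiling. A sketch of the same loop using client-go; the kubeconfig path and poll interval are illustrative:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its NodeReady condition is True.
func waitForNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // poll interval is illustrative
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForNodeReady(cs, "ha-485000-m02", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("node is Ready")
}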
	I0725 10:49:12.588113    3774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 10:49:12.588160    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:12.588167    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.588173    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.588177    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.640690    3774 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0725 10:49:12.646847    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.646903    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:49:12.646909    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.646915    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.646917    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.650764    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:12.651266    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:12.651274    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.651280    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.651283    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.655046    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:12.655360    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.655369    3774 pod_ready.go:81] duration metric: took 8.506318ms for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.655377    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.655414    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pnmm6
	I0725 10:49:12.655418    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.655424    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.655428    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.657266    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.657799    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:12.657806    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.657811    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.657815    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.659256    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.659713    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.659721    3774 pod_ready.go:81] duration metric: took 4.339404ms for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.659728    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.659764    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000
	I0725 10:49:12.659769    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.659775    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.659779    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.661249    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.661658    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:12.661665    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.661671    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.661674    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.662972    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.663349    3774 pod_ready.go:92] pod "etcd-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.663356    3774 pod_ready.go:81] duration metric: took 3.624252ms for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.663362    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.663396    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m02
	I0725 10:49:12.663402    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.663407    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.663412    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.665923    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:12.666288    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:12.666295    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.666300    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.666303    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.667727    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.668096    3774 pod_ready.go:92] pod "etcd-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.668110    3774 pod_ready.go:81] duration metric: took 4.73801ms for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.668116    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.668146    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:49:12.668151    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.668156    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.668160    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.669612    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.789876    3774 request.go:629] Waited for 119.652546ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:12.789922    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:12.789939    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.789951    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.789958    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.792981    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:12.793487    3774 pod_ready.go:92] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.793499    3774 pod_ready.go:81] duration metric: took 125.375312ms for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
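The repeated "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's built-in rate limiter: the config dump above leaves QPS and Burst at 0, so the library falls back to its own defaults (commonly 5 requests per second with a burst of 10), which delays these back-to-back GETs. A sketch of raising those limits when building the client; the kubeconfig path is illustrative:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	// Raise the client-side rate limits so bursts of sequential GETs are not
	// delayed by the default limiter.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("client built with QPS=%v Burst=%d (%T)\n", cfg.QPS, cfg.Burst, cs)
}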
	I0725 10:49:12.793518    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.989031    3774 request.go:629] Waited for 195.453141ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000
	I0725 10:49:12.989083    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000
	I0725 10:49:12.989166    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.989181    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.989188    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.991803    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.188209    3774 request.go:629] Waited for 195.602163ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:13.188275    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:13.188289    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.188295    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.188299    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.190503    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.190953    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:13.190962    3774 pod_ready.go:81] duration metric: took 397.432093ms for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.190969    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.390284    3774 request.go:629] Waited for 199.267222ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m02
	I0725 10:49:13.390387    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m02
	I0725 10:49:13.390398    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.390409    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.390414    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.393500    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:13.589029    3774 request.go:629] Waited for 194.741072ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:13.589060    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:13.589065    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.589074    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.589108    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.591724    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.592136    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:13.592146    3774 pod_ready.go:81] duration metric: took 401.165409ms for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.592153    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.789047    3774 request.go:629] Waited for 196.700248ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m03
	I0725 10:49:13.789110    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m03
	I0725 10:49:13.789120    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.789143    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.789150    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.792302    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:13.988478    3774 request.go:629] Waited for 195.547657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:13.988571    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:13.988579    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.988590    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.988601    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.991591    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.992155    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:13.992168    3774 pod_ready.go:81] duration metric: took 400.004283ms for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.992177    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.188856    3774 request.go:629] Waited for 196.606719ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000
	I0725 10:49:14.189024    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000
	I0725 10:49:14.189035    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.189046    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.189056    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.192692    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:14.388336    3774 request.go:629] Waited for 195.165082ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:14.388469    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:14.388481    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.388492    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.388500    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.392008    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:14.392470    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:14.392479    3774 pod_ready.go:81] duration metric: took 400.290042ms for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.392486    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.589221    3774 request.go:629] Waited for 196.680325ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:49:14.589260    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:49:14.589265    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.589271    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.589276    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.591675    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:14.788243    3774 request.go:629] Waited for 196.189639ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:14.788314    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:14.788319    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.788325    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.788338    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.790264    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:14.790682    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:14.790691    3774 pod_ready.go:81] duration metric: took 398.194597ms for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.790698    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.989573    3774 request.go:629] Waited for 198.821418ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:49:14.989634    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:49:14.989642    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.989650    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.989656    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.992325    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:15.189580    3774 request.go:629] Waited for 196.795704ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.189705    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.189717    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.189728    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.189736    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.193091    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:15.193587    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:15.193598    3774 pod_ready.go:81] duration metric: took 402.889494ms for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.193607    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.388771    3774 request.go:629] Waited for 195.112594ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:49:15.388925    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:49:15.388937    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.388951    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.388957    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.391994    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:15.589664    3774 request.go:629] Waited for 197.220516ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.589694    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.589699    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.589737    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.589742    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.683154    3774 round_trippers.go:574] Response Status: 200 OK in 93 milliseconds
	I0725 10:49:15.683671    3774 pod_ready.go:92] pod "kube-proxy-65n48" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:15.683684    3774 pod_ready.go:81] duration metric: took 490.064641ms for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.683693    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.789750    3774 request.go:629] Waited for 106.017908ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:49:15.789823    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:49:15.789840    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.789847    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.789850    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.791774    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:15.988478    3774 request.go:629] Waited for 196.346928ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:15.988570    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:15.988579    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.988587    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.988593    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.990813    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:15.991161    3774 pod_ready.go:92] pod "kube-proxy-9w8bj" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:15.991171    3774 pod_ready.go:81] duration metric: took 307.468405ms for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.991178    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.188563    3774 request.go:629] Waited for 197.337332ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:49:16.188614    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:49:16.188625    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.188638    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.188644    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.191974    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:16.388347    3774 request.go:629] Waited for 195.920607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:16.388377    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:16.388382    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.388388    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.388392    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.390392    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:16.390667    3774 pod_ready.go:92] pod "kube-proxy-dc5jq" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:16.390675    3774 pod_ready.go:81] duration metric: took 399.488144ms for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.390682    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.588555    3774 request.go:629] Waited for 197.808091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:49:16.588661    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:49:16.588673    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.588684    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.588693    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.591719    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:16.789759    3774 request.go:629] Waited for 197.31897ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:49:16.789876    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:49:16.789887    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.789898    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.789919    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.793070    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:16.793486    3774 pod_ready.go:92] pod "kube-proxy-mvbkh" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:16.793498    3774 pod_ready.go:81] duration metric: took 402.805905ms for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.793509    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.989321    3774 request.go:629] Waited for 195.762555ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:49:16.989391    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:49:16.989399    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.989406    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.989412    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.991782    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:17.188598    3774 request.go:629] Waited for 196.377768ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:17.188628    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:17.188633    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.188682    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.188688    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.190695    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:17.190988    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:17.190998    3774 pod_ready.go:81] duration metric: took 397.476264ms for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.191012    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.388526    3774 request.go:629] Waited for 197.466794ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:49:17.388597    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:49:17.388604    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.388613    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.388618    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.391737    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:17.588483    3774 request.go:629] Waited for 195.44881ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:17.588569    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:17.588577    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.588586    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.588593    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.591164    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:17.591519    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:17.591528    3774 pod_ready.go:81] duration metric: took 400.505326ms for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.591535    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.789192    3774 request.go:629] Waited for 197.613853ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:49:17.789246    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:49:17.789256    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.789265    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.789271    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.792807    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:17.989780    3774 request.go:629] Waited for 196.433914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:17.989847    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:17.989853    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.989859    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.989862    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.991949    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:17.992236    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:17.992245    3774 pod_ready.go:81] duration metric: took 400.700976ms for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.992253    3774 pod_ready.go:38] duration metric: took 5.404058179s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 10:49:17.992267    3774 api_server.go:52] waiting for apiserver process to appear ...
	I0725 10:49:17.992318    3774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 10:49:18.004356    3774 api_server.go:72] duration metric: took 15.250996818s to wait for apiserver process to appear ...
	I0725 10:49:18.004369    3774 api_server.go:88] waiting for apiserver healthz status ...
	I0725 10:49:18.004385    3774 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0725 10:49:18.008895    3774 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0725 10:49:18.008938    3774 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0725 10:49:18.008944    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.008958    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.008961    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.009486    3774 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0725 10:49:18.009545    3774 api_server.go:141] control plane version: v1.30.3
	I0725 10:49:18.009554    3774 api_server.go:131] duration metric: took 5.181232ms to wait for apiserver health ...
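The health pass above issues GET /healthz (expecting the literal body "ok") and then GET /version to read the control-plane version, v1.30.3 in this run. A sketch of both calls through a client-go discovery client; the kubeconfig path is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// GET /healthz: a healthy apiserver answers 200 with the body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("healthz: %s\n", body)
	// GET /version: reports the control-plane version.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}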
	I0725 10:49:18.009561    3774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 10:49:18.188337    3774 request.go:629] Waited for 178.740534ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.188382    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.188390    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.188435    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.188439    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.193593    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:49:18.200076    3774 system_pods.go:59] 26 kube-system pods found
	I0725 10:49:18.200099    3774 system_pods.go:61] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:49:18.200103    3774 system_pods.go:61] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:49:18.200106    3774 system_pods.go:61] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:49:18.200112    3774 system_pods.go:61] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 10:49:18.200115    3774 system_pods.go:61] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:49:18.200119    3774 system_pods.go:61] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:49:18.200121    3774 system_pods.go:61] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:49:18.200123    3774 system_pods.go:61] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:49:18.200127    3774 system_pods.go:61] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0725 10:49:18.200131    3774 system_pods.go:61] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:49:18.200135    3774 system_pods.go:61] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 10:49:18.200139    3774 system_pods.go:61] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:49:18.200142    3774 system_pods.go:61] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:49:18.200147    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 10:49:18.200151    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:49:18.200154    3774 system_pods.go:61] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:49:18.200156    3774 system_pods.go:61] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:49:18.200160    3774 system_pods.go:61] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 10:49:18.200163    3774 system_pods.go:61] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:49:18.200166    3774 system_pods.go:61] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:49:18.200170    3774 system_pods.go:61] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 10:49:18.200173    3774 system_pods.go:61] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:49:18.200178    3774 system_pods.go:61] "kube-vip-ha-485000" [9a116dbb-295f-4e79-8ab5-61e81ab7ddf4] Running
	I0725 10:49:18.200181    3774 system_pods.go:61] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:49:18.200184    3774 system_pods.go:61] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:49:18.200186    3774 system_pods.go:61] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running
	I0725 10:49:18.200193    3774 system_pods.go:74] duration metric: took 190.622361ms to wait for pod list to return data ...
	I0725 10:49:18.200199    3774 default_sa.go:34] waiting for default service account to be created ...
	I0725 10:49:18.388524    3774 request.go:629] Waited for 188.275556ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:49:18.388557    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:49:18.388562    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.388570    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.388573    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.390924    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:18.391069    3774 default_sa.go:45] found service account: "default"
	I0725 10:49:18.391078    3774 default_sa.go:55] duration metric: took 190.872598ms for default service account to be created ...
	I0725 10:49:18.391084    3774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 10:49:18.588425    3774 request.go:629] Waited for 197.306337ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.588458    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.588463    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.588469    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.588474    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.593698    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:49:18.599145    3774 system_pods.go:86] 26 kube-system pods found
	I0725 10:49:18.599162    3774 system_pods.go:89] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:49:18.599167    3774 system_pods.go:89] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:49:18.599170    3774 system_pods.go:89] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:49:18.599175    3774 system_pods.go:89] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 10:49:18.599194    3774 system_pods.go:89] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:49:18.599199    3774 system_pods.go:89] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:49:18.599204    3774 system_pods.go:89] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:49:18.599207    3774 system_pods.go:89] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:49:18.599213    3774 system_pods.go:89] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0725 10:49:18.599216    3774 system_pods.go:89] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:49:18.599223    3774 system_pods.go:89] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 10:49:18.599227    3774 system_pods.go:89] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:49:18.599232    3774 system_pods.go:89] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:49:18.599237    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 10:49:18.599241    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:49:18.599245    3774 system_pods.go:89] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:49:18.599249    3774 system_pods.go:89] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:49:18.599253    3774 system_pods.go:89] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 10:49:18.599272    3774 system_pods.go:89] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:49:18.599280    3774 system_pods.go:89] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:49:18.599286    3774 system_pods.go:89] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 10:49:18.599291    3774 system_pods.go:89] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:49:18.599295    3774 system_pods.go:89] "kube-vip-ha-485000" [9a116dbb-295f-4e79-8ab5-61e81ab7ddf4] Running
	I0725 10:49:18.599298    3774 system_pods.go:89] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:49:18.599301    3774 system_pods.go:89] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:49:18.599304    3774 system_pods.go:89] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running
	I0725 10:49:18.599309    3774 system_pods.go:126] duration metric: took 208.21895ms to wait for k8s-apps to be running ...
	I0725 10:49:18.599322    3774 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 10:49:18.599377    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:49:18.611737    3774 system_svc.go:56] duration metric: took 12.412676ms WaitForService to wait for kubelet
	I0725 10:49:18.611752    3774 kubeadm.go:582] duration metric: took 15.858385641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 10:49:18.611763    3774 node_conditions.go:102] verifying NodePressure condition ...
	I0725 10:49:18.789774    3774 request.go:629] Waited for 177.962916ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0725 10:49:18.789887    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0725 10:49:18.789910    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.789924    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.789930    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.793551    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:18.794424    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794438    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794448    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794452    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794455    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794458    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794462    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794465    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794468    3774 node_conditions.go:105] duration metric: took 182.699541ms to run NodePressure ...
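
	(Editor's note: the wait above polls /api/v1/namespaces/kube-system/pods until every pod reports Running, then reads per-node capacity from /api/v1/nodes. Below is a minimal client-go sketch of those two checks for illustration only; the kubeconfig path is a placeholder and this is not minikube's own system_pods.go / node_conditions.go code.)

// podswait.go: poll kube-system pods for Running and read node capacity,
// mirroring the system_pods / node_conditions checks in the log above.
// Sketch only; the kubeconfig path below is a placeholder.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Wait until every kube-system pod reports phase Running.
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("%d/%d kube-system pods Running\n", running, len(pods.Items))
		if len(pods.Items) > 0 && running == len(pods.Items) {
			break
		}
		time.Sleep(2 * time.Second)
	}

	// Read per-node CPU and ephemeral-storage capacity, as node_conditions does.
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
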
	I0725 10:49:18.794476    3774 start.go:241] waiting for startup goroutines ...
	I0725 10:49:18.794495    3774 start.go:255] writing updated cluster config ...
	I0725 10:49:18.818273    3774 out.go:177] 
	I0725 10:49:18.857228    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:18.857312    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:49:18.879124    3774 out.go:177] * Starting "ha-485000-m03" control-plane node in "ha-485000" cluster
	I0725 10:49:18.920865    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:49:18.920897    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:49:18.921101    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:49:18.921120    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:49:18.921243    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:49:18.922716    3774 start.go:360] acquireMachinesLock for ha-485000-m03: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:49:18.922851    3774 start.go:364] duration metric: took 109.932µs to acquireMachinesLock for "ha-485000-m03"
	I0725 10:49:18.922881    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:49:18.922891    3774 fix.go:54] fixHost starting: m03
	I0725 10:49:18.923315    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:49:18.923351    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:49:18.932987    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51938
	I0725 10:49:18.933376    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:49:18.933781    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:49:18.933802    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:49:18.934032    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:49:18.934154    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:18.934244    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetState
	I0725 10:49:18.934342    3774 main.go:141] libmachine: (ha-485000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:18.934436    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid from json: 3293
	I0725 10:49:18.935384    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid 3293 missing from process table
	I0725 10:49:18.935414    3774 fix.go:112] recreateIfNeeded on ha-485000-m03: state=Stopped err=<nil>
	I0725 10:49:18.935426    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	W0725 10:49:18.935534    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:49:18.973151    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000-m03" ...
	I0725 10:49:19.030981    3774 main.go:141] libmachine: (ha-485000-m03) Calling .Start
	I0725 10:49:19.031240    3774 main.go:141] libmachine: (ha-485000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:19.031347    3774 main.go:141] libmachine: (ha-485000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid
	I0725 10:49:19.033047    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid 3293 missing from process table
	I0725 10:49:19.033059    3774 main.go:141] libmachine: (ha-485000-m03) DBG | pid 3293 is in state "Stopped"
	I0725 10:49:19.033079    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid...
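
	(Editor's note: fix.go decides whether the existing VM needs a restart by checking whether the pid recorded in hyperkit.pid is still in the process table; when it is not, the stale pid file is removed and the machine is treated as Stopped. The following is a rough sketch of such a liveness probe, assuming a plain-text pid file; the path comes from the log, but the code is illustrative, not the hyperkit driver's implementation.)

// pidcheck.go: report whether the process recorded in a pid file is still alive.
// Illustrative sketch; signal 0 only probes for existence, it delivers nothing.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

func pidAlive(pidFile string) (bool, error) {
	data, err := os.ReadFile(pidFile)
	if err != nil {
		if os.IsNotExist(err) {
			return false, nil // no pid file at all
		}
		return false, err
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return false, fmt.Errorf("malformed pid file %s: %w", pidFile, err)
	}
	// Sending signal 0 checks for existence/permission without touching the process.
	err = syscall.Kill(pid, syscall.Signal(0))
	switch {
	case err == nil:
		return true, nil
	case err == syscall.ESRCH:
		return false, nil // pid missing from the process table, as in the log
	default:
		return false, err // e.g. EPERM: process exists but is owned by someone else
	}
}

func main() {
	alive, err := pidAlive("/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid")
	if err != nil {
		panic(err)
	}
	fmt.Println("hyperkit pid alive:", alive)
}
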
	I0725 10:49:19.033332    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Using UUID 8bec60ab-aefc-4069-8cc1-870073932ec4
	I0725 10:49:19.063128    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Generated MAC f2:df:a:a6:c4:51
	I0725 10:49:19.063160    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:49:19.063291    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8bec60ab-aefc-4069-8cc1-870073932ec4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ba960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:49:19.063329    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8bec60ab-aefc-4069-8cc1-870073932ec4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ba960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:49:19.063383    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8bec60ab-aefc-4069-8cc1-870073932ec4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/ha-485000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:49:19.063428    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8bec60ab-aefc-4069-8cc1-870073932ec4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/ha-485000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:49:19.063456    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:49:19.065029    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Pid is 3803
	I0725 10:49:19.065402    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Attempt 0
	I0725 10:49:19.065431    3774 main.go:141] libmachine: (ha-485000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:19.065485    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid from json: 3803
	I0725 10:49:19.067650    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Searching for f2:df:a:a6:c4:51 in /var/db/dhcpd_leases ...
	I0725 10:49:19.067771    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:49:19.067897    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e185}
	I0725 10:49:19.067935    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetConfigRaw
	I0725 10:49:19.067949    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e173}
	I0725 10:49:19.067969    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:49:19.067982    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a3e09c}
	I0725 10:49:19.068002    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Found match: f2:df:a:a6:c4:51
	I0725 10:49:19.068014    3774 main.go:141] libmachine: (ha-485000-m03) DBG | IP: 192.169.0.7
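
	(Editor's note: the driver resolves the VM's IP by scanning /var/db/dhcpd_leases for the generated MAC, as the entries above show. Below is a small sketch of that lookup, assuming the usual "{ name=... ip_address=... hw_address=1,<mac> ... }" block format written by the macOS vmnet DHCP server; it is an illustration, not the driver's parser.)

// leases.go: look up the IP handed to a given MAC in /var/db/dhcpd_leases.
// Sketch only; assumes the key=value lease-block format described above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipForMAC(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	var match bool
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		switch {
		case line == "{":
			ip, match = "", false // start of a new lease block
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is written as "1,<mac>"; drop the leading type byte.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			match = strings.EqualFold(hw, mac)
		case line == "}":
			if match && ip != "" {
				return ip, nil
			}
		}
	}
	if err := scanner.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "f2:df:a:a6:c4:51")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip)
}
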
	I0725 10:49:19.068702    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:49:19.069020    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:49:19.069625    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:49:19.069642    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:19.069816    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:19.069967    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:19.070130    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:19.070266    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:19.070381    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:19.070553    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:19.070752    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:19.070765    3774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 10:49:19.074618    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:49:19.083650    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:49:19.084521    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:49:19.084551    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:49:19.084569    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:49:19.084582    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:49:19.479143    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:49:19.479156    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:49:19.594100    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:49:19.594116    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:49:19.594124    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:49:19.594130    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:49:19.595151    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:49:19.595161    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:49:25.234237    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:49:25.234303    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:49:25.234325    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:49:25.257972    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:49:54.133594    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:49:54.133609    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetMachineName
	I0725 10:49:54.133749    3774 buildroot.go:166] provisioning hostname "ha-485000-m03"
	I0725 10:49:54.133758    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetMachineName
	I0725 10:49:54.133861    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.133950    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.134036    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.134136    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.134221    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.134384    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.134539    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.134553    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000-m03 && echo "ha-485000-m03" | sudo tee /etc/hostname
	I0725 10:49:54.200648    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000-m03
	
	I0725 10:49:54.200663    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.200790    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.200879    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.200961    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.201044    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.201180    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.201420    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.201433    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:49:54.263039    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
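
	(Editor's note: provisioning runs a handful of commands over SSH with the machine's private key, first setting the hostname and then pinning it in /etc/hosts so it resolves locally, as the commands above show. The following is a stripped-down sketch of that flow with golang.org/x/crypto/ssh, reusing the address and key path from the log as placeholders; minikube's own SSH plumbing (sshutil / ssh_runner) is more involved.)

// sshprovision.go: run the hostname-provisioning commands over SSH,
// roughly mirroring the log above. Sketch only; test environment assumed.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runSSH(client *ssh.Client, cmd string) (string, error) {
	// One session per command: an SSH session can only run a single command.
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	host := "ha-485000-m03"
	cmds := []string{
		fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", host, host),
		fmt.Sprintf("grep -q %s /etc/hosts || echo '127.0.1.1 %s' | sudo tee -a /etc/hosts", host, host),
	}
	for _, c := range cmds {
		out, err := runSSH(client, c)
		fmt.Printf("$ %s\n%s", c, out)
		if err != nil {
			panic(err)
		}
	}
}
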
	I0725 10:49:54.263056    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:49:54.263070    3774 buildroot.go:174] setting up certificates
	I0725 10:49:54.263076    3774 provision.go:84] configureAuth start
	I0725 10:49:54.263083    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetMachineName
	I0725 10:49:54.263216    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:49:54.263306    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.263391    3774 provision.go:143] copyHostCerts
	I0725 10:49:54.263427    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:49:54.263491    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:49:54.263497    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:49:54.263638    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:49:54.263837    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:49:54.263878    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:49:54.263883    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:49:54.263964    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:49:54.264113    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:49:54.264157    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:49:54.264162    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:49:54.264238    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:49:54.264391    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000-m03 san=[127.0.0.1 192.169.0.7 ha-485000-m03 localhost minikube]
	I0725 10:49:54.466588    3774 provision.go:177] copyRemoteCerts
	I0725 10:49:54.466634    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:49:54.466649    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.466797    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.466896    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.466976    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.467051    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:54.501149    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:49:54.501228    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:49:54.520581    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:49:54.520648    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 10:49:54.540217    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:49:54.540294    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 10:49:54.559440    3774 provision.go:87] duration metric: took 296.351002ms to configureAuth
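
	(Editor's note: configureAuth above regenerates the Docker server certificate so its SANs cover the node's addresses and names (127.0.0.1, 192.169.0.7, ha-485000-m03, localhost, minikube) and signs it with the shared CA key. Below is a condensed crypto/x509 sketch of signing such a SAN certificate; it creates a throwaway CA in memory instead of loading .minikube/certs/ca.pem, so it illustrates the shape of the call, not minikube's provision code.)

// servercert.go: sign a server certificate whose SANs match the ones in the
// log's "generating server cert" line. Self-contained sketch with an
// in-memory CA; minikube would load the existing ca.pem / ca-key.pem instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs shown in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-485000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-485000-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
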
	I0725 10:49:54.559454    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:49:54.559628    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:54.559646    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:54.559774    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.559865    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.559954    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.560037    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.560104    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.560211    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.560343    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.560351    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:49:54.614691    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:49:54.614704    3774 buildroot.go:70] root file system type: tmpfs
	I0725 10:49:54.614776    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:49:54.614790    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.614925    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.615019    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.615123    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.615214    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.615348    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.615499    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.615544    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:49:54.680274    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 10:49:54.680293    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.680409    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.680496    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.680575    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.680666    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.680823    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.680965    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.680985    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:49:56.345971    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 10:49:56.345985    3774 machine.go:97] duration metric: took 37.275852067s to provisionDockerMachine
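
	(Editor's note: the docker.service text pushed over SSH above is presumably rendered from a template on the host before being piped through `sudo tee` and swapped in with the diff/mv/daemon-reload one-liner. The cumulative Environment="NO_PROXY=..." lines, one per upstream control-plane IP, stand out; systemd keeps the last assignment for the variable. A toy text/template sketch of rendering just those lines follows; it is not minikube's actual template.)

// dockerunit.go: render cumulative NO_PROXY Environment= lines for a
// docker.service drop-in from a list of control-plane IPs. Toy sketch only.
package main

import (
	"os"
	"strings"
	"text/template"
)

const unitTmpl = `[Service]
Type=notify
Restart=on-failure
{{range .NoProxyLines}}Environment="NO_PROXY={{.}}"
{{end}}
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
`

func main() {
	ips := []string{"192.169.0.5", "192.169.0.6"}
	// One cumulative NO_PROXY line per upstream node, as in the log above.
	var lines []string
	for i := range ips {
		lines = append(lines, strings.Join(ips[:i+1], ","))
	}
	t := template.Must(template.New("unit").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, struct{ NoProxyLines []string }{lines}); err != nil {
		panic(err)
	}
}
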
	I0725 10:49:56.345993    3774 start.go:293] postStartSetup for "ha-485000-m03" (driver="hyperkit")
	I0725 10:49:56.346001    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:49:56.346022    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.346236    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:49:56.346252    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.346356    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.346471    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.346553    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.346642    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:56.380977    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:49:56.384488    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:49:56.384501    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:49:56.384614    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:49:56.384806    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:49:56.384814    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:49:56.385027    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:49:56.392745    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:49:56.412613    3774 start.go:296] duration metric: took 66.60995ms for postStartSetup
	I0725 10:49:56.412634    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.412808    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:49:56.412819    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.412903    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.412988    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.413073    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.413150    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:56.446365    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:49:56.446421    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:49:56.498637    3774 fix.go:56] duration metric: took 37.575244811s for fixHost
	I0725 10:49:56.498667    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.498812    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.498919    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.499016    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.499104    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.499238    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:56.499386    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:56.499396    3774 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 10:49:56.555439    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929796.646350339
	
	I0725 10:49:56.555451    3774 fix.go:216] guest clock: 1721929796.646350339
	I0725 10:49:56.555456    3774 fix.go:229] Guest: 2024-07-25 10:49:56.646350339 -0700 PDT Remote: 2024-07-25 10:49:56.498656 -0700 PDT m=+90.525587748 (delta=147.694339ms)
	I0725 10:49:56.555467    3774 fix.go:200] guest clock delta is within tolerance: 147.694339ms
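
	(Editor's note: fix.go reads the guest's clock via `date +%s.%N` over SSH, compares it with the host time, and leaves it alone when the delta is within tolerance, as the ~148ms delta above was; presumably it would resync otherwise. A small sketch of that comparison follows; the 2s tolerance constant is an assumption of the sketch, not minikube's value.)

// clockdelta.go: compare a guest "date +%s.%N" reading with the host clock
// and decide whether it is within tolerance, as fix.go does above.
// The tolerance below is an assumed value for illustration.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

const clockTolerance = 2 * time.Second // assumed threshold

// parseGuestClock turns "1721929796.646350339" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Normalize the fractional part to exactly 9 digits of nanoseconds.
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		} else {
			frac += strings.Repeat("0", 9-len(frac))
		}
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721929796.646350339") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	within := math.Abs(float64(delta)) <= float64(clockTolerance)
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
}
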
	I0725 10:49:56.555470    3774 start.go:83] releasing machines lock for "ha-485000-m03", held for 37.632106624s
	I0725 10:49:56.555487    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.555618    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:49:56.578528    3774 out.go:177] * Found network options:
	I0725 10:49:56.599071    3774 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0725 10:49:56.620070    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:49:56.620097    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:49:56.620115    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.620743    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.620967    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.621083    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:49:56.621119    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	W0725 10:49:56.621187    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:49:56.621219    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:49:56.621317    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 10:49:56.621327    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.621338    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.621512    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.621541    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.621703    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.621726    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.621905    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:56.621918    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.622026    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	W0725 10:49:56.652986    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:49:56.653049    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:49:56.706201    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 10:49:56.706215    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:49:56.706281    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:49:56.721415    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:49:56.730471    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:49:56.739423    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:49:56.739473    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:49:56.748522    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:49:56.757654    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:49:56.766745    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:49:56.775683    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:49:56.785223    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:49:56.794261    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:49:56.803167    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:49:56.812374    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:49:56.820684    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:49:56.828942    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:56.928553    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
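
	(Editor's note: the cgroup driver is forced to cgroupfs above by rewriting SystemdCgroup in /etc/containerd/config.toml with a sed substitution. The snippet below is an in-Go equivalent of that single substitution using regexp, shown purely for illustration; minikube runs the sed command over SSH rather than editing the file like this.)

// cgroupfs.go: flip SystemdCgroup to false in a containerd config fragment,
// the in-Go equivalent of the sed one-liner in the log. Illustrative only.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := []byte(`[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`)
	// Match the whole SystemdCgroup line, preserving its indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
	fmt.Print(string(out))
}
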
	I0725 10:49:56.947129    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:49:56.947197    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:49:56.959097    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:49:56.970776    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:49:56.984568    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:49:56.994746    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:49:57.005192    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:49:57.026508    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:49:57.037792    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:49:57.052754    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:49:57.055697    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:49:57.062771    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:49:57.076283    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:49:57.171356    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:49:57.271094    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:49:57.271118    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 10:49:57.288365    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:57.388283    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:49:59.699253    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.310915582s)
	I0725 10:49:59.699313    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 10:49:59.710426    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:49:59.721257    3774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 10:49:59.814380    3774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 10:49:59.915914    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:00.023205    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 10:50:00.037086    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:50:00.048065    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:00.150126    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 10:50:00.211936    3774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 10:50:00.212049    3774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 10:50:00.216805    3774 start.go:563] Will wait 60s for crictl version
	I0725 10:50:00.216881    3774 ssh_runner.go:195] Run: which crictl
	I0725 10:50:00.220095    3774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 10:50:00.244551    3774 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
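
	(Editor's note: start.go waits up to 60s for the CRI socket at /var/run/cri-dockerd.sock to appear by retrying a `stat`, and then waits for `crictl version` to respond, as shown above. Below is a minimal local sketch of that kind of wait-with-deadline; the poll interval is an assumption of the sketch.)

// socketwait.go: wait up to a deadline for a socket path to exist,
// mirroring the "Will wait 60s for socket path" step above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // the socket (or file) exists
		} else if !os.IsNotExist(err) {
			return err // unexpected stat failure
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cri-dockerd socket is present")
}
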
	I0725 10:50:00.244627    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:50:00.264059    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:50:00.304819    3774 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0725 10:50:00.346760    3774 out.go:177]   - env NO_PROXY=192.169.0.5
	I0725 10:50:00.368834    3774 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0725 10:50:00.394661    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:50:00.395004    3774 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0725 10:50:00.399400    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:50:00.409863    3774 mustload.go:65] Loading cluster: ha-485000
	I0725 10:50:00.410047    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:00.410280    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:50:00.410302    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:50:00.419259    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51960
	I0725 10:50:00.419623    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:50:00.419945    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:50:00.419955    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:50:00.420150    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:50:00.420256    3774 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:50:00.420338    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:00.420423    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3787
	I0725 10:50:00.421353    3774 host.go:66] Checking if "ha-485000" exists ...
	I0725 10:50:00.421593    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:50:00.421616    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:50:00.430525    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51962
	I0725 10:50:00.430861    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:50:00.431198    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:50:00.431211    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:50:00.431432    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:50:00.431553    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:50:00.431647    3774 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000 for IP: 192.169.0.7
	I0725 10:50:00.431652    3774 certs.go:194] generating shared ca certs ...
	I0725 10:50:00.431664    3774 certs.go:226] acquiring lock for ca certs: {Name:mk9c2ae9585d6ecb7080ac84ff2a6c71f713722e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:50:00.431829    3774 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key
	I0725 10:50:00.431909    3774 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key
	I0725 10:50:00.431917    3774 certs.go:256] generating profile certs ...
	I0725 10:50:00.432022    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key
	I0725 10:50:00.432138    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.cfc1c64d
	I0725 10:50:00.432211    3774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key
	I0725 10:50:00.432218    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 10:50:00.432239    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 10:50:00.432260    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 10:50:00.432278    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 10:50:00.432295    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 10:50:00.432331    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 10:50:00.432368    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 10:50:00.432392    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 10:50:00.432488    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem (1338 bytes)
	W0725 10:50:00.432538    3774 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732_empty.pem, impossibly tiny 0 bytes
	I0725 10:50:00.432546    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 10:50:00.432580    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem (1078 bytes)
	I0725 10:50:00.432612    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem (1123 bytes)
	I0725 10:50:00.432641    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem (1679 bytes)
	I0725 10:50:00.432707    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:50:00.432744    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.432766    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem -> /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.432786    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.432812    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:50:00.432904    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:50:00.432984    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:50:00.433067    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:50:00.433144    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:50:00.457759    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0725 10:50:00.461662    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0725 10:50:00.470495    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0725 10:50:00.474031    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0725 10:50:00.483513    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0725 10:50:00.486473    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0725 10:50:00.495156    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0725 10:50:00.498306    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0725 10:50:00.507713    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0725 10:50:00.510936    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0725 10:50:00.519399    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0725 10:50:00.523489    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0725 10:50:00.532193    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 10:50:00.552954    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 10:50:00.573061    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 10:50:00.593555    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 10:50:00.613482    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 10:50:00.633390    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 10:50:00.653522    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 10:50:00.673721    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 10:50:00.693637    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 10:50:00.713744    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem --> /usr/share/ca-certificates/1732.pem (1338 bytes)
	I0725 10:50:00.733957    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /usr/share/ca-certificates/17322.pem (1708 bytes)
	I0725 10:50:00.753667    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0725 10:50:00.767462    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0725 10:50:00.781289    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0725 10:50:00.795165    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0725 10:50:00.808987    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0725 10:50:00.823098    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0725 10:50:00.836829    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0725 10:50:00.850678    3774 ssh_runner.go:195] Run: openssl version
	I0725 10:50:00.854970    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17322.pem && ln -fs /usr/share/ca-certificates/17322.pem /etc/ssl/certs/17322.pem"
	I0725 10:50:00.863536    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.867064    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:38 /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.867101    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.871309    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17322.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 10:50:00.879945    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 10:50:00.888535    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.892087    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.892134    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.896459    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 10:50:00.905152    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1732.pem && ln -fs /usr/share/ca-certificates/1732.pem /etc/ssl/certs/1732.pem"
	I0725 10:50:00.913558    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.917014    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:38 /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.917052    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.921381    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1732.pem /etc/ssl/certs/51391683.0"
	I0725 10:50:00.929938    3774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 10:50:00.933439    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 10:50:00.937823    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 10:50:00.942083    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 10:50:00.946333    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 10:50:00.950751    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 10:50:00.954972    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
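Each `openssl x509 -checkend 86400` call above asks whether a certificate will still be valid 24 hours from now. A minimal Go equivalent using crypto/x509 (the path is just one of the certificates checked above):

```go
// Sketch of the check behind "openssl x509 -checkend 86400": parse a PEM
// certificate and confirm it will still be valid 24 hours from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour) // -checkend 86400 means 86400 seconds = 24h
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid beyond 24h")
	}
}
```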
	I0725 10:50:00.959334    3774 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.30.3 docker true true} ...
	I0725 10:50:00.959393    3774 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-485000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
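The kubelet drop-in logged at kubeadm.go:946 is the same unit for every node except for the hostname override and node IP. A rough sketch of rendering it with text/template (illustrative only; this is not minikube's actual template):

```go
// Sketch: render a kubelet systemd drop-in like the one logged above from the
// per-node values (Kubernetes version, hostname override, node IP).
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.3", "ha-485000-m03", "192.169.0.7"})
}
```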
	I0725 10:50:00.959413    3774 kube-vip.go:115] generating kube-vip config ...
	I0725 10:50:00.959451    3774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 10:50:00.974500    3774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 10:50:00.974542    3774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
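The manifest above pins the virtual IP 192.169.0.254 to eth0 via ARP and enables control-plane load-balancing on port 8443, so the API server should answer TLS on the VIP once kube-vip is running. A quick hypothetical reachability probe (certificate verification disabled because only connectivity is of interest here):

```go
// Probe whether the kube-vip virtual IP from the manifest above is serving
// TLS on 8443. Reachability check only, not part of the test itself.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	conn, err := tls.DialWithDialer(
		&net.Dialer{Timeout: 5 * time.Second},
		"tcp", "192.169.0.254:8443",
		&tls.Config{InsecureSkipVerify: true}, // reachability only, not identity
	)
	if err != nil {
		log.Fatalf("VIP not answering: %v", err)
	}
	defer conn.Close()
	fmt.Println("kube-vip VIP is serving TLS on 8443")
}
```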
	I0725 10:50:00.974599    3774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 10:50:00.983391    3774 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 10:50:00.983448    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0725 10:50:00.992451    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0725 10:50:01.006435    3774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 10:50:01.020173    3774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0725 10:50:01.034556    3774 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0725 10:50:01.037605    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
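The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the VIP. The same idea as a small Go sketch that prints the rewritten file instead of copying it over /etc/hosts:

```go
// Same idea as the bash one-liner above: drop any existing
// "control-plane.minikube.internal" line and append the VIP entry.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var out []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // stale entry, same as the grep -v in the log
		}
		out = append(out, line)
	}
	out = append(out, "192.169.0.254\tcontrol-plane.minikube.internal")
	fmt.Println(strings.Join(out, "\n"))
}
```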
	I0725 10:50:01.047456    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:01.141578    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:50:01.156504    3774 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 10:50:01.156711    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:01.178105    3774 out.go:177] * Verifying Kubernetes components...
	I0725 10:50:01.219418    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:01.336829    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:50:01.353501    3774 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:50:01.353708    3774 kapi.go:59] client config for ha-485000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe203400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0725 10:50:01.353744    3774 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0725 10:50:01.353906    3774 node_ready.go:35] waiting up to 6m0s for node "ha-485000-m03" to be "Ready" ...
	I0725 10:50:01.353944    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:01.353949    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.353955    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.353958    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.356860    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:01.357167    3774 node_ready.go:49] node "ha-485000-m03" has status "Ready":"True"
	I0725 10:50:01.357178    3774 node_ready.go:38] duration metric: took 3.262682ms for node "ha-485000-m03" to be "Ready" ...
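The node_ready/pod_ready loops below are plain GETs against the API server followed by a check of the object's Ready condition. A self-contained sketch of that probe using only the standard library and the client certificate paths from the kapi.go line above (an approximation, not the test's round_trippers client):

```go
// Sketch of the readiness probe behind the round_trippers GETs: fetch the node
// object with the client certificate from the kubeconfig and look for the
// Ready condition in its status.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	base := "/Users/jenkins/minikube-integration/19326-1195/.minikube"
	cert, err := tls.LoadX509KeyPair(base+"/profiles/ha-485000/client.crt", base+"/profiles/ha-485000/client.key")
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var node nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&node); err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == "Ready" {
			fmt.Printf("node ha-485000-m03 Ready=%s\n", c.Status)
		}
	}
}
```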
	I0725 10:50:01.357193    3774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 10:50:01.357239    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:01.357246    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.357252    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.357257    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.362406    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:50:01.367672    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:01.367737    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:01.367747    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.367754    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.367758    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.370874    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:01.372501    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:01.372645    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.372667    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.372873    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.375539    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:01.868232    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:01.868248    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.868255    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.868258    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.870654    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:01.871201    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:01.871209    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.871215    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.871218    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.873125    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:02.368740    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:02.368762    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.368777    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.368783    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.377869    3774 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0725 10:50:02.379271    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:02.379283    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.379290    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.379293    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.385746    3774 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0725 10:50:02.869348    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:02.869364    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.869371    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.869375    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.871499    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:02.872136    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:02.872144    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.872150    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.872155    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.874538    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:03.367877    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:03.367889    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.367896    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.367899    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.371090    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:03.371735    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:03.371743    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.371750    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.371753    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.374291    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:03.374875    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:03.869639    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:03.869654    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.869661    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.869665    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.871755    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:03.872260    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:03.872269    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.872275    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.872280    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.874379    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:04.369638    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:04.369657    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.369703    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.369708    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.372772    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:04.373269    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:04.373277    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.373282    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.373299    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.375712    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:04.869487    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:04.869502    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.869508    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.869512    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.871992    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:04.872445    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:04.872453    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.872459    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.872463    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.874498    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:05.368695    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:05.368711    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.368717    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.368721    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.370818    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:05.371324    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:05.371336    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.371342    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.371346    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.373135    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:05.869401    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:05.869464    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.869473    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.869478    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.871690    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:05.872113    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:05.872121    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.872153    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.872158    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.874010    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:05.874326    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:06.369093    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:06.369156    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.369165    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.369170    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.372202    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:06.372616    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:06.372624    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.372630    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.372639    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.374312    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:06.869508    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:06.869524    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.869531    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.869536    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.872066    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:06.872572    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:06.872580    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.872586    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.872589    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.874648    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:07.368133    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:07.368154    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.368178    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.368182    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.370664    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:07.371146    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:07.371153    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.371158    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.371165    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.372921    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:07.868556    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:07.868570    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.868576    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.868580    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.870520    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:07.871027    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:07.871035    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.871041    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.871044    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.872683    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:08.368226    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:08.368242    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.368249    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.368253    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.370410    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:08.370844    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:08.370852    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.370858    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.370862    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.372615    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:08.373006    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:08.868118    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:08.868176    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.868187    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.868194    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.870430    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:08.870901    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:08.870909    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.870915    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.870919    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.872656    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:09.368941    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:09.368959    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.368966    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.368970    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.371000    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:09.371430    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:09.371438    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.371444    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.371447    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.372991    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:09.868002    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:09.868047    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.868058    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.868062    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.870388    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:09.870824    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:09.870832    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.870838    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.870842    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.872677    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:10.369049    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:10.369064    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.369071    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.369074    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.371043    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:10.371476    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:10.371483    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.371489    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.371492    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.372986    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:10.373378    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:10.869188    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:10.869207    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.869244    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.869251    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.871587    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:10.872055    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:10.872063    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.872068    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.872071    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.873650    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.368644    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:11.368660    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.368670    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.368674    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.370971    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.371396    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:11.371404    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.371410    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.371414    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.373238    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.373609    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.373619    3774 pod_ready.go:81] duration metric: took 10.005797843s for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.373657    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.373710    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pnmm6
	I0725 10:50:11.373716    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.373722    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.373728    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.375656    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.376026    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:11.376035    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.376041    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.376044    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.378143    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.378749    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.378758    3774 pod_ready.go:81] duration metric: took 5.088497ms for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.378765    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.378806    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000
	I0725 10:50:11.378810    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.378816    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.378820    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.380851    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.381363    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:11.381371    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.381377    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.381381    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.383335    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.383692    3774 pod_ready.go:92] pod "etcd-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.383702    3774 pod_ready.go:81] duration metric: took 4.931732ms for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.383708    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.383749    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m02
	I0725 10:50:11.383754    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.383760    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.383764    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.385637    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.386081    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:11.386088    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.386094    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.386097    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.388000    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.388355    3774 pod_ready.go:92] pod "etcd-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.388366    3774 pod_ready.go:81] duration metric: took 4.652083ms for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.388374    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.388416    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:11.388421    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.388427    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.388431    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.390189    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.390609    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:11.390617    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.390622    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.390633    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.392651    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.889083    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:11.889128    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.889138    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.889143    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.891738    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.892209    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:11.892216    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.892221    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.892223    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.893988    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:12.390496    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:12.390512    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.390518    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.390521    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.392470    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:12.392850    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:12.392858    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.392864    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.392869    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.394421    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:12.889565    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:12.889659    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.889673    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.889678    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.892819    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:12.893389    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:12.893400    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.893409    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.893414    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.895094    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:13.389794    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:13.389809    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.389816    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.389820    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.391840    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:13.392224    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:13.392231    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.392237    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.392241    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.393832    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:13.394204    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:13.889796    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:13.889812    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.889818    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.889823    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.892206    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:13.892725    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:13.892732    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.892737    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.892747    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.894556    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:14.390135    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:14.390150    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.390156    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.390160    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.392174    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:14.392583    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:14.392590    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.392596    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.392612    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.394268    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:14.889560    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:14.889573    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.889579    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.889582    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.891400    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:14.891841    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:14.891848    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.891854    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.891860    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.893620    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:15.388714    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:15.388793    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.388820    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.388827    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.391756    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:15.392117    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:15.392125    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.392134    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.392139    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.393815    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:15.890117    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:15.890137    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.890149    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.890157    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.893399    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:15.894228    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:15.894235    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.894241    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.894245    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.895912    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:15.896234    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:16.389894    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:16.389912    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.389920    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.389924    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.391955    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:16.392480    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:16.392488    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.392493    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.392505    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.394310    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:16.889781    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:16.889806    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.889818    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.889826    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.893556    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:16.894315    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:16.894326    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.894333    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.894337    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.896126    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:17.388575    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:17.388587    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.388602    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.388605    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.390595    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:17.391157    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:17.391165    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.391170    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.391173    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.392829    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:17.889351    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:17.889367    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.889373    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.889376    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.891538    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:17.891973    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:17.891981    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.891987    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.891990    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.893775    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:18.389267    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:18.389290    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.389304    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.389312    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.392450    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:18.392859    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:18.392867    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.392872    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.392875    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.394522    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:18.394948    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:18.890099    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:18.890111    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.890118    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.890121    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.892565    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:18.892967    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:18.892975    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.892981    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.892985    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.894937    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:19.389827    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:19.389924    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.389936    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.389942    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.392795    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:19.393525    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:19.393536    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.393544    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.393553    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.395412    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:19.889931    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:19.889949    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.889958    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.889962    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.892495    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:19.893008    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:19.893015    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.893021    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.893024    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.894590    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.390037    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:20.390057    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.390068    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.390074    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.393277    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:20.393997    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:20.394008    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.394016    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.394021    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.395790    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.396084    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:20.889112    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:20.889127    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.889135    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.889139    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.891727    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:20.892142    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:20.892149    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.892155    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.892158    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.893897    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.894418    3774 pod_ready.go:92] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.894427    3774 pod_ready.go:81] duration metric: took 9.505922344s for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
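
The loop above is the pod_ready wait: the pod (and its node) is re-fetched roughly every 500ms until the PodReady condition reports True, here taking ~9.5s for etcd-ha-485000-m03. Below is a minimal, hedged sketch of the same idea using client-go; it is not minikube's own code, and the KUBECONFIG lookup and poll interval are assumptions modeled on this log.

// Illustrative sketch only: poll a pod's Ready condition until it is True.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady fetches the pod and reports whether its PodReady condition is True.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumes KUBECONFIG points at the cluster's kubeconfig, as in this test run.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ready, err := podReady(context.Background(), cs, "kube-system", "etcd-ha-485000-m03")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log above polls on roughly this cadence
	}
}
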
	I0725 10:50:20.894440    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.894470    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000
	I0725 10:50:20.894475    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.894481    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.894485    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.896512    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:20.897090    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:20.897097    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.897103    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.897106    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.898896    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.899262    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.899272    3774 pod_ready.go:81] duration metric: took 4.826573ms for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.899278    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.899309    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m02
	I0725 10:50:20.899314    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.899319    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.899324    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.901127    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.901676    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:20.901683    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.901689    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.901692    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.903167    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.903492    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.903501    3774 pod_ready.go:81] duration metric: took 4.217035ms for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.903507    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.903535    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m03
	I0725 10:50:20.903539    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.903548    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.903554    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.905060    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.905423    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:20.905430    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.905435    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.905438    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.906893    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.907231    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.907242    3774 pod_ready.go:81] duration metric: took 3.730011ms for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.907249    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.907283    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000
	I0725 10:50:20.907288    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.907293    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.907298    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.908832    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.909193    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:20.909200    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.909206    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.909211    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.910822    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.911148    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.911157    3774 pod_ready.go:81] duration metric: took 3.903336ms for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.911164    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.090249    3774 request.go:629] Waited for 179.043752ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:50:21.090326    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:50:21.090337    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.090348    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.090357    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.093923    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:21.289919    3774 request.go:629] Waited for 195.572332ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:21.289952    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:21.289956    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.289963    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.289968    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.292331    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:21.292914    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:21.292923    3774 pod_ready.go:81] duration metric: took 381.74891ms for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
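
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter (request.go), not from the API server. With the library defaults of QPS=5 and Burst=10, each request beyond the burst queues for about 200ms, which matches the ~180-200ms waits logged here. The sketch below shows how a rest.Config's QPS and Burst can be raised; the values are arbitrary examples, not what minikube configures.

// Sketch: build a clientset with a larger client-side rate limit.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// client-go defaults to QPS=5, Burst=10; at 5 requests/sec each throttled
	// call queues for ~200ms, which is what the waits above reflect.
	cfg.QPS = 50   // example value
	cfg.Burst = 100 // example value
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newFasterClient(clientcmd.RecommendedHomeFile); err != nil {
		panic(err)
	}
}
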
	I0725 10:50:21.292930    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.490166    3774 request.go:629] Waited for 197.198888ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:50:21.490242    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:50:21.490248    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.490254    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.490258    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.492317    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:21.690619    3774 request.go:629] Waited for 197.80318ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:21.690692    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:21.690700    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.690707    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.690712    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.693156    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:21.693612    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:21.693621    3774 pod_ready.go:81] duration metric: took 400.680496ms for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.693628    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.889496    3774 request.go:629] Waited for 195.831751ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:50:21.889578    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:50:21.889584    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.889590    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.889594    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.891941    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:22.089475    3774 request.go:629] Waited for 196.98259ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:22.089532    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:22.089542    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.089554    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.089562    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.092579    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:22.093142    3774 pod_ready.go:92] pod "kube-proxy-65n48" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:22.093152    3774 pod_ready.go:81] duration metric: took 399.51252ms for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.093159    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.289233    3774 request.go:629] Waited for 195.965994ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:50:22.289304    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:50:22.289317    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.289329    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.289336    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.292489    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:22.491051    3774 request.go:629] Waited for 197.933507ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:22.491135    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:22.491147    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.491172    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.491181    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.494764    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:22.495189    3774 pod_ready.go:92] pod "kube-proxy-9w8bj" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:22.495199    3774 pod_ready.go:81] duration metric: took 402.028626ms for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.495206    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.690648    3774 request.go:629] Waited for 195.401863ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:50:22.690726    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:50:22.690734    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.690741    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.690747    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.693258    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:22.889677    3774 request.go:629] Waited for 195.917412ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:22.889724    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:22.889761    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.889767    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.889773    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.892446    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:22.892903    3774 pod_ready.go:92] pod "kube-proxy-dc5jq" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:22.892913    3774 pod_ready.go:81] duration metric: took 397.69615ms for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.892920    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.090761    3774 request.go:629] Waited for 197.803166ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:50:23.090814    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:50:23.090819    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.090826    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.090831    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.093064    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:23.290673    3774 request.go:629] Waited for 196.835784ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:50:23.290721    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:50:23.290730    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.290752    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.290763    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.293787    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:23.294164    3774 pod_ready.go:97] node "ha-485000-m04" hosting pod "kube-proxy-mvbkh" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-485000-m04" has status "Ready":"Unknown"
	I0725 10:50:23.294178    3774 pod_ready.go:81] duration metric: took 401.248534ms for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	E0725 10:50:23.294187    3774 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-485000-m04" hosting pod "kube-proxy-mvbkh" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-485000-m04" has status "Ready":"Unknown"
	I0725 10:50:23.294199    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.490423    3774 request.go:629] Waited for 196.114187ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:50:23.490462    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:50:23.490468    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.490475    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.490481    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.493175    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:23.689290    3774 request.go:629] Waited for 195.78427ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:23.689415    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:23.689424    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.689435    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.689442    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.692307    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:23.692985    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:23.692998    3774 pod_ready.go:81] duration metric: took 398.785835ms for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.693009    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.890957    3774 request.go:629] Waited for 197.902466ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:50:23.891038    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:50:23.891044    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.891050    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.891055    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.893361    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:24.090079    3774 request.go:629] Waited for 196.359986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:24.090164    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:24.090175    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.090187    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.090195    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.094081    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:24.094638    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:24.094651    3774 pod_ready.go:81] duration metric: took 401.630539ms for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:24.094660    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:24.290893    3774 request.go:629] Waited for 196.185136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:50:24.291015    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:50:24.291026    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.291038    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.291045    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.294065    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:24.489582    3774 request.go:629] Waited for 194.956133ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:24.489621    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:24.489639    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.489681    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.489688    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.492299    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:24.492641    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:24.492651    3774 pod_ready.go:81] duration metric: took 397.980834ms for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:24.492659    3774 pod_ready.go:38] duration metric: took 23.135149255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 10:50:24.492671    3774 api_server.go:52] waiting for apiserver process to appear ...
	I0725 10:50:24.492734    3774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 10:50:24.505745    3774 api_server.go:72] duration metric: took 23.348906591s to wait for apiserver process to appear ...
	I0725 10:50:24.505757    3774 api_server.go:88] waiting for apiserver healthz status ...
	I0725 10:50:24.505768    3774 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0725 10:50:24.508893    3774 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0725 10:50:24.508926    3774 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0725 10:50:24.508931    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.508937    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.508942    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.509529    3774 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0725 10:50:24.509577    3774 api_server.go:141] control plane version: v1.30.3
	I0725 10:50:24.509588    3774 api_server.go:131] duration metric: took 3.825788ms to wait for apiserver health ...
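
Once the pods are Ready, the log probes the API server's /healthz endpoint and then issues GET /version to read the control plane version (v1.30.3 here). A hedged equivalent using client-go's discovery client, reusing the kubeconfig's credentials rather than anonymous access, might look like this; it is not the code minikube runs.

// Sketch: check apiserver /healthz and read the server version through the
// same authenticated transport as the kubeconfig.
package main

import (
	"context"
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz; a healthy apiserver answers 200 with body "ok", as above.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version, the call the log shows immediately after healthz.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.30.3
}
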
	I0725 10:50:24.509593    3774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 10:50:24.689653    3774 request.go:629] Waited for 180.026101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:24.689691    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:24.689696    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.689703    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.689707    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.694162    3774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 10:50:24.699605    3774 system_pods.go:59] 26 kube-system pods found
	I0725 10:50:24.699621    3774 system_pods.go:61] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:50:24.699626    3774 system_pods.go:61] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:50:24.699629    3774 system_pods.go:61] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:50:24.699632    3774 system_pods.go:61] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running
	I0725 10:50:24.699635    3774 system_pods.go:61] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:50:24.699638    3774 system_pods.go:61] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:50:24.699641    3774 system_pods.go:61] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:50:24.699644    3774 system_pods.go:61] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:50:24.699647    3774 system_pods.go:61] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running
	I0725 10:50:24.699651    3774 system_pods.go:61] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:50:24.699654    3774 system_pods.go:61] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running
	I0725 10:50:24.699657    3774 system_pods.go:61] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:50:24.699660    3774 system_pods.go:61] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:50:24.699662    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running
	I0725 10:50:24.699665    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:50:24.699668    3774 system_pods.go:61] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:50:24.699670    3774 system_pods.go:61] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:50:24.699672    3774 system_pods.go:61] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running
	I0725 10:50:24.699676    3774 system_pods.go:61] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:50:24.699680    3774 system_pods.go:61] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:50:24.699682    3774 system_pods.go:61] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running
	I0725 10:50:24.699685    3774 system_pods.go:61] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:50:24.699687    3774 system_pods.go:61] "kube-vip-ha-485000" [d13235f9-c36c-4988-8638-4964a718c070] Running
	I0725 10:50:24.699690    3774 system_pods.go:61] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:50:24.699692    3774 system_pods.go:61] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:50:24.699697    3774 system_pods.go:61] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 10:50:24.699701    3774 system_pods.go:74] duration metric: took 190.102528ms to wait for pod list to return data ...
	I0725 10:50:24.699707    3774 default_sa.go:34] waiting for default service account to be created ...
	I0725 10:50:24.890642    3774 request.go:629] Waited for 190.892646ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:50:24.890727    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:50:24.890735    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.890743    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.890750    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.893133    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:24.893199    3774 default_sa.go:45] found service account: "default"
	I0725 10:50:24.893209    3774 default_sa.go:55] duration metric: took 193.493853ms for default service account to be created ...
	I0725 10:50:24.893214    3774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 10:50:25.089856    3774 request.go:629] Waited for 196.580095ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:25.089959    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:25.089969    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:25.089980    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:25.089989    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:25.095665    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:50:25.100358    3774 system_pods.go:86] 26 kube-system pods found
	I0725 10:50:25.100371    3774 system_pods.go:89] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:50:25.100375    3774 system_pods.go:89] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:50:25.100378    3774 system_pods.go:89] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:50:25.100382    3774 system_pods.go:89] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running
	I0725 10:50:25.100386    3774 system_pods.go:89] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:50:25.100389    3774 system_pods.go:89] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:50:25.100393    3774 system_pods.go:89] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:50:25.100396    3774 system_pods.go:89] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:50:25.100400    3774 system_pods.go:89] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running
	I0725 10:50:25.100415    3774 system_pods.go:89] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:50:25.100424    3774 system_pods.go:89] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running
	I0725 10:50:25.100429    3774 system_pods.go:89] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:50:25.100435    3774 system_pods.go:89] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:50:25.100453    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running
	I0725 10:50:25.100457    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:50:25.100460    3774 system_pods.go:89] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:50:25.100465    3774 system_pods.go:89] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:50:25.100469    3774 system_pods.go:89] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running
	I0725 10:50:25.100472    3774 system_pods.go:89] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:50:25.100476    3774 system_pods.go:89] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:50:25.100481    3774 system_pods.go:89] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running
	I0725 10:50:25.100485    3774 system_pods.go:89] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:50:25.100489    3774 system_pods.go:89] "kube-vip-ha-485000" [d13235f9-c36c-4988-8638-4964a718c070] Running
	I0725 10:50:25.100492    3774 system_pods.go:89] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:50:25.100495    3774 system_pods.go:89] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:50:25.100501    3774 system_pods.go:89] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 10:50:25.100509    3774 system_pods.go:126] duration metric: took 207.289229ms to wait for k8s-apps to be running ...
	I0725 10:50:25.100516    3774 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 10:50:25.100565    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:50:25.111946    3774 system_svc.go:56] duration metric: took 11.425609ms WaitForService to wait for kubelet
	I0725 10:50:25.111960    3774 kubeadm.go:582] duration metric: took 23.955115877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 10:50:25.111982    3774 node_conditions.go:102] verifying NodePressure condition ...
	I0725 10:50:25.290128    3774 request.go:629] Waited for 178.101329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0725 10:50:25.290194    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0725 10:50:25.290200    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:25.290206    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:25.290210    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:25.292310    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:25.293114    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293124    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293137    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293141    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293144    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293147    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293150    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293153    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293156    3774 node_conditions.go:105] duration metric: took 181.167189ms to run NodePressure ...
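
The NodePressure step lists all nodes and records each one's ephemeral-storage (17734596Ki) and cpu (2) capacity. A short, hedged client-go sketch of reading those same fields, again not minikube's implementation:

// Sketch: list nodes and print the two capacity fields recorded above.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
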
	I0725 10:50:25.293164    3774 start.go:241] waiting for startup goroutines ...
	I0725 10:50:25.293180    3774 start.go:255] writing updated cluster config ...
	I0725 10:50:25.316414    3774 out.go:177] 
	I0725 10:50:25.353963    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:25.354081    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:50:25.376345    3774 out.go:177] * Starting "ha-485000-m04" worker node in "ha-485000" cluster
	I0725 10:50:25.418264    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:50:25.418295    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:50:25.418479    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:50:25.418492    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:50:25.418579    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:50:25.419248    3774 start.go:360] acquireMachinesLock for ha-485000-m04: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:50:25.419314    3774 start.go:364] duration metric: took 49.541µs to acquireMachinesLock for "ha-485000-m04"
	I0725 10:50:25.419336    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:50:25.419342    3774 fix.go:54] fixHost starting: m04
	I0725 10:50:25.419646    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:50:25.419671    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:50:25.428557    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51966
	I0725 10:50:25.428876    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:50:25.429185    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:50:25.429196    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:50:25.429408    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:50:25.429520    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:25.429598    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetState
	I0725 10:50:25.429683    3774 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:25.429771    3774 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid from json: 3386
	I0725 10:50:25.430679    3774 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid 3386 missing from process table
	I0725 10:50:25.430699    3774 fix.go:112] recreateIfNeeded on ha-485000-m04: state=Stopped err=<nil>
	I0725 10:50:25.430707    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	W0725 10:50:25.430787    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:50:25.451265    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000-m04" ...
	I0725 10:50:25.492428    3774 main.go:141] libmachine: (ha-485000-m04) Calling .Start
	I0725 10:50:25.492574    3774 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:25.492592    3774 main.go:141] libmachine: (ha-485000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/hyperkit.pid
	I0725 10:50:25.492648    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Using UUID c1175b3a-154e-40e8-a691-44a5e1615e54
	I0725 10:50:25.517781    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Generated MAC ba:e9:ef:e5:fe:75
	I0725 10:50:25.517800    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:50:25.517982    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c1175b3a-154e-40e8-a691-44a5e1615e54", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:50:25.518030    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c1175b3a-154e-40e8-a691-44a5e1615e54", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:50:25.518090    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "c1175b3a-154e-40e8-a691-44a5e1615e54", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/ha-485000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:50:25.518138    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U c1175b3a-154e-40e8-a691-44a5e1615e54 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/ha-485000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:50:25.518152    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:50:25.519588    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Pid is 3811
	I0725 10:50:25.520056    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Attempt 0
	I0725 10:50:25.520068    3774 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:25.520140    3774 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid from json: 3811
	I0725 10:50:25.521299    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Searching for ba:e9:ef:e5:fe:75 in /var/db/dhcpd_leases ...
	I0725 10:50:25.521358    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:50:25.521373    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a3e1a8}
	I0725 10:50:25.521401    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e185}
	I0725 10:50:25.521437    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e173}
	I0725 10:50:25.521449    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:50:25.521464    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Found match: ba:e9:ef:e5:fe:75
	I0725 10:50:25.521477    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetConfigRaw
	I0725 10:50:25.521524    3774 main.go:141] libmachine: (ha-485000-m04) DBG | IP: 192.169.0.8
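
To locate the restarted VM, the hyperkit driver takes the MAC it generated for the machine (ba:e9:ef:e5:fe:75) and scans macOS's /var/db/dhcpd_leases until an entry with that hardware address turns up, resolving it to 192.169.0.8 above. The sketch below shows a rough version of that lookup; the "ip_address="/"hw_address=" field names are an assumption about the vmnet lease-file format, since the log only shows the already-parsed entries.

// Sketch: scan /var/db/dhcpd_leases for the IP bound to a given MAC address.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// Remember the most recent IP seen in the current lease block.
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is assumed to be recorded as "<type>,<mac>", e.g. "1,ba:e9:ef:e5:fe:75".
			if strings.HasSuffix(line, ","+mac) || strings.HasSuffix(line, "="+mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "ba:e9:ef:e5:fe:75")
	if err != nil {
		panic(err)
	}
	fmt.Println("VM IP:", ip) // the log above resolved this MAC to 192.169.0.8
}
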
	I0725 10:50:25.522369    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetIP
	I0725 10:50:25.522631    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:50:25.523230    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:50:25.523241    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:25.523348    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:25.523441    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:25.523528    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:25.523612    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:25.523711    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:25.523847    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:25.524050    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:25.524062    3774 main.go:141] libmachine: About to run SSH command:
	hostname
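
Provisioning then runs shell commands on the node over SSH using the driver's host, port, key path and username: first a bare "hostname" (which still returns "minikube" below), later the "sudo hostname ... | sudo tee /etc/hostname" command seen further down. A hedged sketch with golang.org/x/crypto/ssh follows; it is not libmachine's native client, the host/port come from this log, and the user and key path are placeholders.

// Sketch: run one command on the node over SSH with key-based auth.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// "docker" and the key path are placeholder values for illustration.
	out, err := runSSH("192.169.0.8:22", "docker", "/path/to/machine/id_rsa", "hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
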
	I0725 10:50:25.527797    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:50:25.536120    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:50:25.537142    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:50:25.537161    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:50:25.537173    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:50:25.537185    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:50:25.927659    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:50:25.927675    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:50:26.042400    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:50:26.042420    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:50:26.042429    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:50:26.042435    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:50:26.043251    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:50:26.043265    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:50:31.642036    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:50:31.642050    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:50:31.642059    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:50:31.665420    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:50:36.584591    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:50:36.584607    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetMachineName
	I0725 10:50:36.584735    3774 buildroot.go:166] provisioning hostname "ha-485000-m04"
	I0725 10:50:36.584758    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetMachineName
	I0725 10:50:36.584843    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.584928    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:36.585028    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.585116    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.585203    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:36.585343    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:36.585486    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:36.585494    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000-m04 && echo "ha-485000-m04" | sudo tee /etc/hostname
	I0725 10:50:36.650595    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000-m04
	
	I0725 10:50:36.650612    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.650747    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:36.650856    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.650943    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.651028    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:36.651142    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:36.651292    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:36.651304    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:50:36.714348    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 10:50:36.714365    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:50:36.714375    3774 buildroot.go:174] setting up certificates
	I0725 10:50:36.714381    3774 provision.go:84] configureAuth start
	I0725 10:50:36.714387    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetMachineName
	I0725 10:50:36.714525    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetIP
	I0725 10:50:36.714638    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.714724    3774 provision.go:143] copyHostCerts
	I0725 10:50:36.714755    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:50:36.714823    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:50:36.714829    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:50:36.714964    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:50:36.715183    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:50:36.715234    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:50:36.715240    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:50:36.715334    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:50:36.715487    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:50:36.715532    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:50:36.715542    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:50:36.715621    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:50:36.715776    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000-m04 san=[127.0.0.1 192.169.0.8 ha-485000-m04 localhost minikube]
	I0725 10:50:36.928446    3774 provision.go:177] copyRemoteCerts
	I0725 10:50:36.928501    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:50:36.928518    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.928667    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:36.928766    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.928853    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:36.928937    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:36.963525    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:50:36.963602    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 10:50:36.983132    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:50:36.983215    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 10:50:37.002467    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:50:37.002541    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:50:37.021539    3774 provision.go:87] duration metric: took 307.147237ms to configureAuth
	I0725 10:50:37.021553    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:50:37.021739    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:37.021752    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:37.021873    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:37.021953    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:37.022028    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.022114    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.022191    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:37.022294    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:37.022425    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:37.022432    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:50:37.077868    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:50:37.077881    3774 buildroot.go:70] root file system type: tmpfs
	I0725 10:50:37.077955    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:50:37.077968    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:37.078088    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:37.078184    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.078265    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.078353    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:37.078490    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:37.078627    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:37.078675    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:50:37.144185    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 10:50:37.144203    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:37.144342    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:37.144434    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.144523    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.144614    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:37.144742    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:37.144915    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:37.144928    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:50:38.716011    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 10:50:38.716027    3774 machine.go:97] duration metric: took 13.192614029s to provisionDockerMachine
	I0725 10:50:38.716035    3774 start.go:293] postStartSetup for "ha-485000-m04" (driver="hyperkit")
	I0725 10:50:38.716042    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:50:38.716057    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:38.716243    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:50:38.716257    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:38.716357    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:38.716441    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.716528    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:38.716625    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:38.757562    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:50:38.761046    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:50:38.761061    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:50:38.761168    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:50:38.761354    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:50:38.761360    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:50:38.761571    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:50:38.769992    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:50:38.800110    3774 start.go:296] duration metric: took 84.064957ms for postStartSetup
	I0725 10:50:38.800132    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:38.800311    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:50:38.800325    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:38.800410    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:38.800488    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.800581    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:38.800667    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:38.835801    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:50:38.835861    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:50:38.889497    3774 fix.go:56] duration metric: took 13.469972078s for fixHost
	I0725 10:50:38.889527    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:38.889666    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:38.889774    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.889857    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.889955    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:38.890084    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:38.890230    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:38.890238    3774 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 10:50:38.946246    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929837.991587228
	
	I0725 10:50:38.946261    3774 fix.go:216] guest clock: 1721929837.991587228
	I0725 10:50:38.946267    3774 fix.go:229] Guest: 2024-07-25 10:50:37.991587228 -0700 PDT Remote: 2024-07-25 10:50:38.889513 -0700 PDT m=+132.915879826 (delta=-897.925772ms)
	I0725 10:50:38.946278    3774 fix.go:200] guest clock delta is within tolerance: -897.925772ms
	I0725 10:50:38.946282    3774 start.go:83] releasing machines lock for "ha-485000-m04", held for 13.526780386s
	I0725 10:50:38.946300    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:38.946427    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetIP
	I0725 10:50:38.970830    3774 out.go:177] * Found network options:
	I0725 10:50:38.991565    3774 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0725 10:50:39.012683    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.012705    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.012716    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:50:39.012729    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:39.013273    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:39.013419    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:39.013516    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:50:39.013540    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	W0725 10:50:39.013573    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.013595    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.013611    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:50:39.013662    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:39.013696    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 10:50:39.013711    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:39.013838    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:39.013857    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:39.014008    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:39.014022    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:39.014127    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:39.014144    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:39.014253    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	W0725 10:50:39.045644    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:50:39.045702    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:50:39.092083    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 10:50:39.092097    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:50:39.092166    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:50:39.107318    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:50:39.116414    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:50:39.125304    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:50:39.125351    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:50:39.134448    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:50:39.143660    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:50:39.152627    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:50:39.161628    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:50:39.170919    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:50:39.179944    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:50:39.189118    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:50:39.198245    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:50:39.206354    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:50:39.214527    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:39.316588    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0725 10:50:39.336368    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:50:39.336436    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:50:39.358127    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:50:39.374161    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:50:39.391711    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:50:39.402592    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:50:39.413153    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:50:39.435734    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:50:39.446272    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:50:39.461753    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:50:39.464748    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:50:39.475362    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:50:39.488985    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:50:39.584434    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:50:39.693634    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:50:39.693658    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 10:50:39.707727    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:39.811725    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:51:40.826240    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.01368475s)
	I0725 10:51:40.826323    3774 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0725 10:51:40.860544    3774 out.go:177] 
	W0725 10:51:40.881235    3774 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 25 17:50:36 ha-485000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.491525254Z" level=info msg="Starting up"
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.491953665Z" level=info msg="containerd not running, starting managed containerd"
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.492498920Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.507900444Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.528985080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529119487Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529190184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529227367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529382906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529449495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529613424Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529662631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529706898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529742376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529932288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.530230915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.531799836Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.531853394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532013423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532060003Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532224150Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532284958Z" level=info msg="metadata content store policy set" policy=shared
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533667564Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533767716Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533813069Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533911547Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533958502Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534060695Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534298445Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534406555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534446041Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534478140Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534510008Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534540807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534571096Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534602037Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534636987Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534677965Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534711495Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534740942Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534776402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534811869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534850836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534886582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534922068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534956302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534986255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535016075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535048332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535088208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535121326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535150501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535236221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535275301Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535313456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535345038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535373791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535440294Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535482913Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535628751Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535672410Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535703274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535734362Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535762669Z" level=info msg="NRI interface is disabled by configuration."
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535960514Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536080093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536142938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536179093Z" level=info msg="containerd successfully booted in 0.029080s"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.510927923Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.523832948Z" level=info msg="Loading containers: start."
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.618418659Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.680635969Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.724258639Z" level=info msg="Loading containers: done."
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.734502052Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.734720064Z" level=info msg="Daemon has completed initialization"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.758872412Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.759079256Z" level=info msg="API listen on [::]:2376"
	Jul 25 17:50:37 ha-485000-m04 systemd[1]: Started Docker Application Container Engine.
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.869455528Z" level=info msg="Processing signal 'terminated'"
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870413697Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870828965Z" level=info msg="Daemon shutdown complete"
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870895825Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870897244Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 25 17:50:38 ha-485000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: docker.service: Deactivated successfully.
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 17:50:39 ha-485000-m04 dockerd[1172]: time="2024-07-25T17:50:39.901687066Z" level=info msg="Starting up"
	Jul 25 17:51:40 ha-485000-m04 dockerd[1172]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0725 10:51:40.881304    3774 out.go:239] * 
	W0725 10:51:40.882170    3774 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 10:51:40.944200    3774 out.go:177] 
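
The nested run above fails because dockerd cannot reach containerd: the journal ends with `failed to dial "/run/containerd/containerd.sock": context deadline exceeded`, systemd then marks docker.service failed, and minikube aborts with RUNTIME_ENABLE. As a rough illustration only (this is not minikube code; the socket path and timeout are simply taken from the log above), the sketch below shows the same class of error a Go program sees when nothing is serving that unix socket within the dial deadline:

	// Minimal sketch, not minikube code: probe the containerd socket the way a
	// client would, to reproduce the "context deadline exceeded" failure mode
	// reported by dockerd in the journal above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path taken from the journal output; assumed, not verified here.
		const sock = "/run/containerd/containerd.sock"

		// On a healthy node containerd is listening and this returns almost
		// immediately; with no listener the dial blocks until the timeout.
		conn, err := net.DialTimeout("unix", sock, 5*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("containerd socket is accepting connections")
	}

In the journal, dockerd[1172] starts at 17:50:39 and only gives up at 17:51:40, which suggests containerd was not serving its socket during that window rather than dockerd being misconfigured.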
	
	
	==> Docker <==
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.054813784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.059017705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.059083409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.059096144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.059214610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.069300205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.070935517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.071147226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.071268178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.125259400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.125463894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.125566513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.125736677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.267673296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.267738318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.267747857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.267820408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:50:02 ha-485000 dockerd[1157]: time="2024-07-25T17:50:02.379012546Z" level=info msg="ignoring event" container=33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 17:50:02 ha-485000 dockerd[1163]: time="2024-07-25T17:50:02.380251166Z" level=info msg="shim disconnected" id=33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a namespace=moby
	Jul 25 17:50:02 ha-485000 dockerd[1163]: time="2024-07-25T17:50:02.380604445Z" level=warning msg="cleaning up after shim disconnected" id=33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a namespace=moby
	Jul 25 17:50:02 ha-485000 dockerd[1163]: time="2024-07-25T17:50:02.380651459Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 25 17:50:27 ha-485000 dockerd[1163]: time="2024-07-25T17:50:27.136247496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:50:27 ha-485000 dockerd[1163]: time="2024-07-25T17:50:27.136372896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:50:27 ha-485000 dockerd[1163]: time="2024-07-25T17:50:27.136388045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:50:27 ha-485000 dockerd[1163]: time="2024-07-25T17:50:27.138147412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5090fc1b1203b       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   b4e0e7e1de4e1       storage-provisioner
	a9a19bab6ef80       55bb025d2cfa5                                                                                         2 minutes ago        Running             kube-proxy                1                   fe4b8e6e60c09       kube-proxy-9w8bj
	09f6510c1f42c       6f1d07c71fa0f                                                                                         2 minutes ago        Running             kindnet-cni               1                   e8f3399f13f16       kindnet-bhkpj
	a31dd6f7c9844       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   9fea40676b8e3       coredns-7db6d8ff4d-dv8wr
	e4820d567662e       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   8af4e93204e34       busybox-fc5497c4f-zq4hj
	33197b74cd7a5       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   b4e0e7e1de4e1       storage-provisioner
	b1634e3371c38       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   f5370b1e66d0a       coredns-7db6d8ff4d-pnmm6
	b2f637b4a6f7e       76932a3b37d7e                                                                                         2 minutes ago        Running             kube-controller-manager   2                   5a37933ef701f       kube-controller-manager-ha-485000
	c5fb9f8438921       38af8ddebf499                                                                                         2 minutes ago        Running             kube-vip                  0                   791f7e270cf81       kube-vip-ha-485000
	00f3c5f6f1b53       76932a3b37d7e                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   5a37933ef701f       kube-controller-manager-ha-485000
	904a632ce7278       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      1                   aa35809cc9a2e       etcd-ha-485000
	5133be0bb8f02       3edc18e7b7672                                                                                         2 minutes ago        Running             kube-scheduler            1                   82b16c3ebba48       kube-scheduler-ha-485000
	bb09ac23fb5c5       1f6d574d502f3                                                                                         2 minutes ago        Running             kube-apiserver            1                   756d86d4401d1       kube-apiserver-ha-485000
	2fb2739ec04ab       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   bb72d6822c0fe       busybox-fc5497c4f-zq4hj
	a05e339cb9497       cbb01a7bd410d                                                                                         7 minutes ago        Exited              coredns                   0                   fc9af6795be55       coredns-7db6d8ff4d-pnmm6
	03e08b86c39eb       cbb01a7bd410d                                                                                         7 minutes ago        Exited              coredns                   0                   b09502869c625       coredns-7db6d8ff4d-dv8wr
	59bc560fbb478       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              8 minutes ago        Exited              kindnet-cni               0                   653a8381d2daf       kindnet-bhkpj
	f34917d25cfb4       55bb025d2cfa5                                                                                         8 minutes ago        Exited              kube-proxy                0                   5e7d6ddf78ead       kube-proxy-9w8bj
	37dcd3b2e16e9       3861cfcd7c04c                                                                                         8 minutes ago        Exited              etcd                      0                   dd6c687a8ef70       etcd-ha-485000
	d070da633e824       1f6d574d502f3                                                                                         8 minutes ago        Exited              kube-apiserver            0                   e3da2073892ac       kube-apiserver-ha-485000
	616226aa67d06       3edc18e7b7672                                                                                         8 minutes ago        Exited              kube-scheduler            0                   5cf9e1c76cde6       kube-scheduler-ha-485000
	
	
	==> coredns [03e08b86c39e] <==
	[INFO] 10.244.1.2:47982 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126089s
	[INFO] 10.244.1.2:51014 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118313s
	[INFO] 10.244.1.2:50008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011365s
	[INFO] 10.244.1.2:32909 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082624s
	[INFO] 10.244.0.4:51190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000474846s
	[INFO] 10.244.0.4:33582 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000106891s
	[INFO] 10.244.0.4:41006 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131032s
	[INFO] 10.244.0.4:51357 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075568s
	[INFO] 10.244.2.2:36960 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121945s
	[INFO] 10.244.2.2:33774 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093868s
	[INFO] 10.244.1.2:53754 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143927s
	[INFO] 10.244.1.2:48688 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063165s
	[INFO] 10.244.0.4:44664 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000211004s
	[INFO] 10.244.0.4:37456 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000036138s
	[INFO] 10.244.0.4:56948 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084971s
	[INFO] 10.244.2.2:47405 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154209s
	[INFO] 10.244.2.2:52677 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123121s
	[INFO] 10.244.1.2:33651 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136058s
	[INFO] 10.244.1.2:35440 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113476s
	[INFO] 10.244.1.2:55517 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000157954s
	[INFO] 10.244.0.4:50721 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102424s
	[INFO] 10.244.0.4:35575 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108877s
	[INFO] 10.244.0.4:46484 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000032004s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a05e339cb949] <==
	[INFO] 10.244.2.2:58701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089843s
	[INFO] 10.244.2.2:45510 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123323s
	[INFO] 10.244.2.2:51099 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121589s
	[INFO] 10.244.2.2:34142 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081501s
	[INFO] 10.244.2.2:53658 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114171s
	[INFO] 10.244.2.2:42295 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137781s
	[INFO] 10.244.1.2:43309 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000505919s
	[INFO] 10.244.1.2:45207 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000056892s
	[INFO] 10.244.1.2:59026 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037705s
	[INFO] 10.244.1.2:38175 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000360332s
	[INFO] 10.244.0.4:34324 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059911s
	[INFO] 10.244.0.4:56363 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000234456s
	[INFO] 10.244.0.4:45101 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172739s
	[INFO] 10.244.0.4:55137 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050682s
	[INFO] 10.244.2.2:40287 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000066074s
	[INFO] 10.244.2.2:34939 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040408s
	[INFO] 10.244.1.2:39138 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069486s
	[INFO] 10.244.1.2:43850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103193s
	[INFO] 10.244.0.4:60304 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000059394s
	[INFO] 10.244.2.2:48042 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111442s
	[INFO] 10.244.2.2:55217 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000131122s
	[INFO] 10.244.1.2:60146 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013068s
	[INFO] 10.244.0.4:54404 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100673s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a31dd6f7c984] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48270 - 2612 "HINFO IN 477576417202223145.7395415910525641618. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.010832063s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1813733051]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.339) (total time: 30002ms):
	Trace[1813733051]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:50:02.340)
	Trace[1813733051]: [30.002590373s] [30.002590373s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[385183908]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.340) (total time: 30002ms):
	Trace[385183908]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:50:02.342)
	Trace[385183908]: [30.002579942s] [30.002579942s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1402818788]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.339) (total time: 30004ms):
	Trace[1402818788]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:50:02.340)
	Trace[1402818788]: [30.004004021s] [30.004004021s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [b1634e3371c3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52414 - 10418 "HINFO IN 8033717330104741663.4370907126619795494. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010811934s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[924906424]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.340) (total time: 30001ms):
	Trace[924906424]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:50:02.341)
	Trace[924906424]: [30.001155479s] [30.001155479s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1854012052]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.340) (total time: 30001ms):
	Trace[1854012052]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:50:02.342)
	Trace[1854012052]: [30.001703258s] [30.001703258s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[364532444]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.340) (total time: 30003ms):
	Trace[364532444]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:50:02.342)
	Trace[364532444]: [30.003093498s] [30.003093498s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-485000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-485000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-485000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T10_43_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:43:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-485000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:51:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:49:21 +0000   Thu, 25 Jul 2024 17:43:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:49:21 +0000   Thu, 25 Jul 2024 17:43:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:49:21 +0000   Thu, 25 Jul 2024 17:43:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:49:21 +0000   Thu, 25 Jul 2024 17:43:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-485000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 86b27484c38a4d51a4045582f517cb4e
	  System UUID:                6bc54fb5-0000-0000-b22e-c84ce3b6b1d3
	  Boot ID:                    62867363-c68f-485a-a817-570e934bdef6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-zq4hj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 coredns-7db6d8ff4d-dv8wr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m14s
	  kube-system                 coredns-7db6d8ff4d-pnmm6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m14s
	  kube-system                 etcd-ha-485000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m27s
	  kube-system                 kindnet-bhkpj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m14s
	  kube-system                 kube-apiserver-ha-485000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-ha-485000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-9w8bj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-scheduler-ha-485000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-vip-ha-485000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m10s                  kube-proxy       
	  Normal  Starting                 8m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m34s                  kubelet          Node ha-485000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m27s                  kubelet          Node ha-485000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m27s                  kubelet          Node ha-485000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m27s                  kubelet          Node ha-485000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m27s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m15s                  node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  NodeReady                7m55s                  kubelet          Node ha-485000 status is now: NodeReady
	  Normal  RegisteredNode           6m59s                  node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  RegisteredNode           5m47s                  node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  Starting                 2m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m58s (x8 over 2m58s)  kubelet          Node ha-485000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s (x8 over 2m58s)  kubelet          Node ha-485000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s (x7 over 2m58s)  kubelet          Node ha-485000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m17s                  node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  RegisteredNode           2m8s                   node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  RegisteredNode           84s                    node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	
	
	Name:               ha-485000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-485000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-485000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T10_44_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:44:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-485000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:51:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:49:14 +0000   Thu, 25 Jul 2024 17:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:49:14 +0000   Thu, 25 Jul 2024 17:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:49:14 +0000   Thu, 25 Jul 2024 17:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:49:14 +0000   Thu, 25 Jul 2024 17:44:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-485000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 074970d7a4814e46ae2a22d0bd0e4ff6
	  System UUID:                528f4ab7-0000-0000-922b-886237fb4fc4
	  Boot ID:                    20d7daee-b931-421b-a580-3b8bb7dfca58
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fmpmr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 etcd-ha-485000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m15s
	  kube-system                 kindnet-mvblc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m15s
	  kube-system                 kube-apiserver-ha-485000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-controller-manager-ha-485000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-proxy-dc5jq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-scheduler-ha-485000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-vip-ha-485000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m12s                  kube-proxy       
	  Normal   Starting                 2m24s                  kube-proxy       
	  Normal   Starting                 3m59s                  kube-proxy       
	  Normal   Starting                 7m16s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  7m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m15s (x2 over 7m16s)  kubelet          Node ha-485000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m15s (x2 over 7m16s)  kubelet          Node ha-485000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m15s (x2 over 7m16s)  kubelet          Node ha-485000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m10s                  node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   RegisteredNode           6m59s                  node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   NodeReady                6m57s                  kubelet          Node ha-485000-m02 status is now: NodeReady
	  Normal   RegisteredNode           5m47s                  node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   NodeHasSufficientPID     4m2s                   kubelet          Node ha-485000-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 4m2s                   kubelet          Node ha-485000-m02 has been rebooted, boot id: 9cf20695-d97a-4263-89d3-0ce013db4ad6
	  Normal   Starting                 4m2s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m2s                   kubelet          Node ha-485000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m2s                   kubelet          Node ha-485000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           3m45s                  node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   Starting                 2m40s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m40s)  kubelet          Node ha-485000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m40s)  kubelet          Node ha-485000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x7 over 2m40s)  kubelet          Node ha-485000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m17s                  node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   RegisteredNode           2m8s                   node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   RegisteredNode           84s                    node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	
	
	Name:               ha-485000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-485000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-485000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T10_45_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:45:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-485000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:51:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:50:01 +0000   Thu, 25 Jul 2024 17:45:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:50:01 +0000   Thu, 25 Jul 2024 17:45:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:50:01 +0000   Thu, 25 Jul 2024 17:45:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:50:01 +0000   Thu, 25 Jul 2024 17:45:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-485000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8b53bbf38b543d5b850150c5769ec64
	  System UUID:                8bec4069-0000-0000-8cc1-870073932ec4
	  Boot ID:                    2ef44dc1-ced8-4904-9887-d1a64e2f490f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4r7sr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 etcd-ha-485000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m2s
	  kube-system                 kindnet-2t428                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-apiserver-ha-485000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-controller-manager-ha-485000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-proxy-65n48                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-ha-485000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-vip-ha-485000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 98s                  kube-proxy       
	  Normal   Starting                 6m1s                 kube-proxy       
	  Normal   NodeAllocatableEnforced  6m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m4s                 node-controller  Node ha-485000-m03 event: Registered Node ha-485000-m03 in Controller
	  Normal   NodeHasSufficientMemory  6m4s (x8 over 6m4s)  kubelet          Node ha-485000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m4s (x8 over 6m4s)  kubelet          Node ha-485000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m4s (x7 over 6m4s)  kubelet          Node ha-485000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m                   node-controller  Node ha-485000-m03 event: Registered Node ha-485000-m03 in Controller
	  Normal   RegisteredNode           5m47s                node-controller  Node ha-485000-m03 event: Registered Node ha-485000-m03 in Controller
	  Normal   RegisteredNode           3m45s                node-controller  Node ha-485000-m03 event: Registered Node ha-485000-m03 in Controller
	  Normal   RegisteredNode           2m17s                node-controller  Node ha-485000-m03 event: Registered Node ha-485000-m03 in Controller
	  Normal   RegisteredNode           2m8s                 node-controller  Node ha-485000-m03 event: Registered Node ha-485000-m03 in Controller
	  Normal   Starting                 101s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  101s                 kubelet          Node ha-485000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    101s                 kubelet          Node ha-485000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s                 kubelet          Node ha-485000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 101s                 kubelet          Node ha-485000-m03 has been rebooted, boot id: 2ef44dc1-ced8-4904-9887-d1a64e2f490f
	  Normal   RegisteredNode           84s                  node-controller  Node ha-485000-m03 event: Registered Node ha-485000-m03 in Controller
	
	
	Name:               ha-485000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-485000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-485000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T10_46_32_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:46:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-485000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:47:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 25 Jul 2024 17:47:02 +0000   Thu, 25 Jul 2024 17:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 25 Jul 2024 17:47:02 +0000   Thu, 25 Jul 2024 17:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 25 Jul 2024 17:47:02 +0000   Thu, 25 Jul 2024 17:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 25 Jul 2024 17:47:02 +0000   Thu, 25 Jul 2024 17:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-485000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ffb85b1f8a234bafbaf23076eaf9ec4c
	  System UUID:                c11740e8-0000-0000-a691-44a5e1615e54
	  Boot ID:                    43ffead3-7e46-4ceb-8092-37bb0c6b4a3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cq6bp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m11s
	  kube-system                 kube-proxy-mvbkh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m5s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    5m12s (x2 over 5m12s)  kubelet          Node ha-485000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m12s (x2 over 5m12s)  kubelet          Node ha-485000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m12s (x2 over 5m12s)  kubelet          Node ha-485000-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  NodeReady                4m49s                  kubelet          Node ha-485000-m04 status is now: NodeReady
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  RegisteredNode           2m18s                  node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  NodeNotReady             98s                    node-controller  Node ha-485000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           85s                    node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035231] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008075] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.691281] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007298] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.796480] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.252745] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.283283] systemd-fstab-generator[464]: Ignoring "noauto" option for root device
	[  +0.095770] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +1.941045] systemd-fstab-generator[1084]: Ignoring "noauto" option for root device
	[  +0.252394] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +0.099578] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.103399] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +0.053926] kauditd_printk_skb: 145 callbacks suppressed
	[  +2.386735] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.102643] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.098759] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.136380] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.447555] systemd-fstab-generator[1564]: Ignoring "noauto" option for root device
	[  +6.725237] kauditd_printk_skb: 168 callbacks suppressed
	[Jul25 17:49] kauditd_printk_skb: 40 callbacks suppressed
	[Jul25 17:50] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [37dcd3b2e16e] <==
	{"level":"info","ts":"2024-07-25T17:48:18.115397Z","caller":"traceutil/trace.go:171","msg":"trace[271686631] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; }","duration":"3.705255846s","start":"2024-07-25T17:48:14.410139Z","end":"2024-07-25T17:48:18.115395Z","steps":["trace[271686631] 'agreement among raft nodes before linearized reading'  (duration: 3.705244532s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:48:18.115407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:48:14.410131Z","time spent":"3.705272729s","remote":"127.0.0.1:55858","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true "}
	2024/07/25 17:48:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-25T17:48:18.115454Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:48:13.072941Z","time spent":"5.042510732s","remote":"127.0.0.1:55972","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/25 17:48:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-25T17:48:18.153243Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T17:48:18.15329Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-25T17:48:18.154957Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-25T17:48:18.155169Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155183Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155198Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155281Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155307Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155328Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155336Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.15534Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155346Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155357Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155612Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155639Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155686Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155716Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.158712Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-25T17:48:18.15883Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-25T17:48:18.158858Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-485000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [904a632ce727] <==
	{"level":"warn","ts":"2024-07-25T17:49:45.312602Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"513c51a7eb0a4980","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:45.312927Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"513c51a7eb0a4980","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:46.387426Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"513c51a7eb0a4980","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:46.38745Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"513c51a7eb0a4980","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:49.315155Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"513c51a7eb0a4980","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:49.315264Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"513c51a7eb0a4980","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:51.388471Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"513c51a7eb0a4980","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:51.388501Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"513c51a7eb0a4980","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:53.316987Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"513c51a7eb0a4980","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:53.317048Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"513c51a7eb0a4980","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:56.389309Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"513c51a7eb0a4980","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:56.389321Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"513c51a7eb0a4980","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:57.318715Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"513c51a7eb0a4980","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:49:57.318764Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"513c51a7eb0a4980","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:50:01.320568Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"513c51a7eb0a4980","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:50:01.320672Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"513c51a7eb0a4980","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:50:01.389431Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"513c51a7eb0a4980","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:50:01.389456Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"513c51a7eb0a4980","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-25T17:50:02.973448Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:50:02.973504Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:50:02.983255Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"513c51a7eb0a4980","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-25T17:50:02.983303Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:50:02.983517Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:50:03.001392Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"513c51a7eb0a4980","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-25T17:50:03.001456Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	
	
	==> kernel <==
	 17:51:43 up 3 min,  0 users,  load average: 0.47, 0.37, 0.16
	Linux ha-485000 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [09f6510c1f42] <==
	I0725 17:51:13.232892       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:51:23.232039       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0725 17:51:23.232078       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:51:23.232322       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:51:23.232350       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	I0725 17:51:23.232688       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:51:23.232718       1 main.go:299] handling current node
	I0725 17:51:23.232727       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:51:23.232731       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:51:33.224521       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:51:33.224559       1 main.go:299] handling current node
	I0725 17:51:33.224570       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:51:33.224575       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:51:33.224831       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0725 17:51:33.224860       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:51:33.224964       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:51:33.224994       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	I0725 17:51:43.233605       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:51:43.233640       1 main.go:299] handling current node
	I0725 17:51:43.233682       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:51:43.233718       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:51:43.233797       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0725 17:51:43.233805       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:51:43.233846       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:51:43.233852       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [59bc560fbb47] <==
	I0725 17:47:42.711271       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	I0725 17:47:52.709603       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:47:52.709679       1 main.go:299] handling current node
	I0725 17:47:52.709701       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:47:52.709715       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:47:52.709796       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0725 17:47:52.709836       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:47:52.709917       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:47:52.709960       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	I0725 17:48:02.715917       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:48:02.716078       1 main.go:299] handling current node
	I0725 17:48:02.716218       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:48:02.716354       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:48:02.716553       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0725 17:48:02.716710       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:48:02.716982       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:48:02.717071       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	I0725 17:48:12.708500       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:48:12.708695       1 main.go:299] handling current node
	I0725 17:48:12.709027       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:48:12.709144       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:48:12.709384       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0725 17:48:12.709480       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:48:12.709756       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:48:12.709925       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [bb09ac23fb5c] <==
	I0725 17:49:12.630251       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0725 17:49:12.630340       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0725 17:49:12.630392       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0725 17:49:12.632677       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 17:49:12.632839       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 17:49:12.671271       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0725 17:49:12.671437       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0725 17:49:12.719211       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0725 17:49:12.719996       1 policy_source.go:224] refreshing policies
	I0725 17:49:12.723042       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 17:49:12.727001       1 shared_informer.go:320] Caches are synced for configmaps
	I0725 17:49:12.739521       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0725 17:49:12.745300       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 17:49:12.754593       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 17:49:12.755092       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0725 17:49:12.755122       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0725 17:49:12.755236       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0725 17:49:12.769857       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0725 17:49:12.771521       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0725 17:49:12.771589       1 aggregator.go:165] initial CRD sync complete...
	I0725 17:49:12.771606       1 autoregister_controller.go:141] Starting autoregister controller
	I0725 17:49:12.771672       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0725 17:49:12.771740       1 cache.go:39] Caches are synced for autoregister controller
	I0725 17:49:12.810193       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0725 17:49:13.634621       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	
	
	==> kube-apiserver [d070da633e82] <==
	W0725 17:48:18.133515       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.133533       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.133551       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0725 17:48:18.135633       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00c549ea8)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0725 17:48:18.135736       1 trace.go:236] Trace[227145877]: "Get" accept:application/json, */*,audit-id:3ee97e8c-a90b-49a9-bd2d-1daff0338a90,client:192.169.0.5,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (25-Jul-2024 17:48:11.626) (total time: 6509ms):
	Trace[227145877]: [6.509602801s] [6.509602801s] END
	E0725 17:48:18.136018       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00c549f18)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0725 17:48:18.136121       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00c549f28)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0725 17:48:18.136277       1 timeout.go:142] post-timeout activity - time-elapsed: 109.945637ms, GET "/readyz" result: <nil>
	I0725 17:48:18.136698       1 trace.go:236] Trace[964619862]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.169.0.5,type:*v1.Endpoints,resource:apiServerIPInfo (25-Jul-2024 17:48:13.107) (total time: 5029ms):
	Trace[964619862]: [5.029134952s] [5.029134952s] END
	I0725 17:48:18.137088       1 trace.go:236] Trace[1328638299]: "Update" accept:application/json, */*,audit-id:845214e4-f445-4918-9288-423a1ea3f222,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (25-Jul-2024 17:48:13.071) (total time: 5065ms):
	Trace[1328638299]: ["GuaranteedUpdate etcd3" audit-id:845214e4-f445-4918-9288-423a1ea3f222,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 5065ms (17:48:13.072)
	Trace[1328638299]:  ---"Txn call failed" err:rpc error: code = Unknown desc = malformed header: missing HTTP content-type 5063ms (17:48:18.136)]
	Trace[1328638299]: [5.065234103s] [5.065234103s] END
	W0725 17:48:18.137275       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.137331       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0725 17:48:18.139661       1 controller.go:159] unable to sync kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0725 17:48:18.140698       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.140768       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.140810       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.140838       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.140862       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0725 17:48:18.140922       1 controller.go:195] "Failed to update lease" err="rpc error: code = Unknown desc = malformed header: missing HTTP content-type"
	I0725 17:48:18.190627       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [00f3c5f6f1b5] <==
	I0725 17:48:52.193509       1 serving.go:380] Generated self-signed cert in-memory
	I0725 17:48:52.581794       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0725 17:48:52.581857       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:48:52.584559       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0725 17:48:52.585376       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 17:48:52.585632       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 17:48:52.585875       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0725 17:49:12.699450       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [b2f637b4a6f7] <==
	I0725 17:49:35.031435       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0725 17:49:35.035164       1 shared_informer.go:320] Caches are synced for TTL
	I0725 17:49:35.035187       1 shared_informer.go:320] Caches are synced for GC
	I0725 17:49:35.040773       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0725 17:49:35.046352       1 shared_informer.go:320] Caches are synced for PVC protection
	I0725 17:49:35.092047       1 shared_informer.go:320] Caches are synced for disruption
	I0725 17:49:35.092079       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0725 17:49:35.152149       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0725 17:49:35.163136       1 shared_informer.go:320] Caches are synced for cronjob
	I0725 17:49:35.212985       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 17:49:35.216861       1 shared_informer.go:320] Caches are synced for job
	I0725 17:49:35.222736       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 17:49:35.624727       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 17:49:35.624867       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0725 17:49:35.658652       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 17:50:02.506695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.397094ms"
	I0725 17:50:02.506924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.822µs"
	I0725 17:50:04.767363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.571098ms"
	I0725 17:50:04.767662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="257.416µs"
	I0725 17:50:11.453829       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-fh5q9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-fh5q9\": the object has been modified; please apply your changes to the latest version and try again"
	I0725 17:50:11.456664       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bfa34ec2-d695-424c-89b9-e3c5a5edb82f", APIVersion:"v1", ResourceVersion:"246", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-fh5q9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-fh5q9": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:50:11.463104       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.202966ms"
	E0725 17:50:11.463156       1 replica_set.go:557] sync "kube-system/coredns-7db6d8ff4d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-7db6d8ff4d": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:50:11.463494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="313.676µs"
	I0725 17:50:11.468346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.454µs"
	
	
	==> kube-proxy [a9a19bab6ef8] <==
	I0725 17:49:32.621289       1 server_linux.go:69] "Using iptables proxy"
	I0725 17:49:32.645667       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0725 17:49:32.691573       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:49:32.691749       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:49:32.691811       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:49:32.695182       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:49:32.696010       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:49:32.696165       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:49:32.700031       1 config.go:192] "Starting service config controller"
	I0725 17:49:32.700126       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:49:32.700303       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:49:32.700360       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:49:32.702446       1 config.go:319] "Starting node config controller"
	I0725 17:49:32.702527       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:49:32.801433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 17:49:32.801644       1 shared_informer.go:320] Caches are synced for service config
	I0725 17:49:32.803416       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f34917d25cfb] <==
	I0725 17:43:28.583459       1 server_linux.go:69] "Using iptables proxy"
	I0725 17:43:28.589732       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0725 17:43:28.631529       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:43:28.631610       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:43:28.631661       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:43:28.635368       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:43:28.635648       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:43:28.635902       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:43:28.637590       1 config.go:192] "Starting service config controller"
	I0725 17:43:28.637644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:43:28.637672       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:43:28.637684       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:43:28.638240       1 config.go:319] "Starting node config controller"
	I0725 17:43:28.639127       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:43:28.738536       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 17:43:28.738551       1 shared_informer.go:320] Caches are synced for service config
	I0725 17:43:28.739452       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5133be0bb8f0] <==
	I0725 17:48:51.828281       1 serving.go:380] Generated self-signed cert in-memory
	W0725 17:49:02.461664       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0725 17:49:02.461710       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 17:49:02.461716       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 17:49:12.469101       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 17:49:12.469301       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:49:12.471867       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 17:49:12.471972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 17:49:12.473141       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 17:49:12.471987       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 17:49:12.773323       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [616226aa67d0] <==
	E0725 17:46:05.510251       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fmpmr\": pod busybox-fc5497c4f-fmpmr is already assigned to node \"ha-485000-m02\"" pod="default/busybox-fc5497c4f-fmpmr"
	I0725 17:46:05.510297       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-fmpmr" node="ha-485000-m02"
	E0725 17:46:32.247841       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-d4ljg\": pod kindnet-d4ljg is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-d4ljg" node="ha-485000-m04"
	E0725 17:46:32.247918       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bc0c4eed-b4e4-414b-95cd-1808497c1030(kube-system/kindnet-d4ljg) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-d4ljg"
	E0725 17:46:32.247933       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-d4ljg\": pod kindnet-d4ljg is already assigned to node \"ha-485000-m04\"" pod="kube-system/kindnet-d4ljg"
	I0725 17:46:32.247946       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-d4ljg" node="ha-485000-m04"
	E0725 17:46:32.260257       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mvbkh\": pod kube-proxy-mvbkh is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mvbkh" node="ha-485000-m04"
	E0725 17:46:32.260311       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 30908c7f-171c-4d89-8f93-69e304ca0ef0(kube-system/kube-proxy-mvbkh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-mvbkh"
	E0725 17:46:32.260323       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mvbkh\": pod kube-proxy-mvbkh is already assigned to node \"ha-485000-m04\"" pod="kube-system/kube-proxy-mvbkh"
	E0725 17:46:32.260438       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pzpdg\": pod kube-proxy-pzpdg is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pzpdg" node="ha-485000-m04"
	E0725 17:46:32.260530       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pzpdg\": pod kube-proxy-pzpdg is already assigned to node \"ha-485000-m04\"" pod="kube-system/kube-proxy-pzpdg"
	E0725 17:46:32.261298       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-h45p5\": pod kindnet-h45p5 is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-h45p5" node="ha-485000-m04"
	E0725 17:46:32.261340       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dc5552bb-a604-499f-b14a-fa1a01d2c56b(kube-system/kindnet-h45p5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-h45p5"
	E0725 17:46:32.261350       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-h45p5\": pod kindnet-h45p5 is already assigned to node \"ha-485000-m04\"" pod="kube-system/kindnet-h45p5"
	I0725 17:46:32.261359       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-h45p5" node="ha-485000-m04"
	I0725 17:46:32.261423       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mvbkh" node="ha-485000-m04"
	E0725 17:46:32.293379       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pndm8\": pod kube-proxy-pndm8 is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pndm8" node="ha-485000-m04"
	E0725 17:46:32.293429       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bf54e0c2-73ed-433e-abf5-a9b0d7effd3a(kube-system/kube-proxy-pndm8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-pndm8"
	E0725 17:46:32.293442       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pndm8\": pod kube-proxy-pndm8 is already assigned to node \"ha-485000-m04\"" pod="kube-system/kube-proxy-pndm8"
	I0725 17:46:32.293453       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-pndm8" node="ha-485000-m04"
	E0725 17:46:32.295248       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cq6bp\": pod kindnet-cq6bp is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-cq6bp" node="ha-485000-m04"
	E0725 17:46:32.295438       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f2a36380-2980-4faf-b814-d1ec1200ce84(kube-system/kindnet-cq6bp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-cq6bp"
	E0725 17:46:32.295540       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cq6bp\": pod kindnet-cq6bp is already assigned to node \"ha-485000-m04\"" pod="kube-system/kindnet-cq6bp"
	I0725 17:46:32.296219       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cq6bp" node="ha-485000-m04"
	E0725 17:48:18.187370       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 25 17:49:32 ha-485000 kubelet[1571]: I0725 17:49:32.157513    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5370b1e66d0a1bab51d0ae170c1d6f725e46c5b56e817bf301e0de380ab2f12"
	Jul 25 17:49:32 ha-485000 kubelet[1571]: I0725 17:49:32.168285    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe4b8e6e60c09771c510e3819e54d688f98d8acf64f71de6c8e6f426c4debe86"
	Jul 25 17:49:32 ha-485000 kubelet[1571]: I0725 17:49:32.168399    1571 scope.go:117] "RemoveContainer" containerID="f34917d25cfb4ba024d1a1cf5effbc6f97bcf2e268abc6bf2361f8c9c5eeb010"
	Jul 25 17:49:44 ha-485000 kubelet[1571]: I0725 17:49:44.115564    1571 scope.go:117] "RemoveContainer" containerID="2fcad10513df422ac171e54526b00f2c84f55db282008fc856542ab865eb1c73"
	Jul 25 17:49:44 ha-485000 kubelet[1571]: E0725 17:49:44.128206    1571 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:49:44 ha-485000 kubelet[1571]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:49:44 ha-485000 kubelet[1571]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:49:44 ha-485000 kubelet[1571]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:49:44 ha-485000 kubelet[1571]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:50:02 ha-485000 kubelet[1571]: I0725 17:50:02.559839    1571 scope.go:117] "RemoveContainer" containerID="f17c37c2a75729e7739bb6f9c964dcacb64cebbab8024774ad9941e632f91c1a"
	Jul 25 17:50:02 ha-485000 kubelet[1571]: I0725 17:50:02.560088    1571 scope.go:117] "RemoveContainer" containerID="33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a"
	Jul 25 17:50:02 ha-485000 kubelet[1571]: E0725 17:50:02.560196    1571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9d25ef1d-567f-4da3-a62e-16958da26713)\"" pod="kube-system/storage-provisioner" podUID="9d25ef1d-567f-4da3-a62e-16958da26713"
	Jul 25 17:50:15 ha-485000 kubelet[1571]: I0725 17:50:15.087487    1571 scope.go:117] "RemoveContainer" containerID="33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a"
	Jul 25 17:50:15 ha-485000 kubelet[1571]: E0725 17:50:15.087667    1571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9d25ef1d-567f-4da3-a62e-16958da26713)\"" pod="kube-system/storage-provisioner" podUID="9d25ef1d-567f-4da3-a62e-16958da26713"
	Jul 25 17:50:27 ha-485000 kubelet[1571]: I0725 17:50:27.087222    1571 scope.go:117] "RemoveContainer" containerID="33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a"
	Jul 25 17:50:44 ha-485000 kubelet[1571]: E0725 17:50:44.123421    1571 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:50:44 ha-485000 kubelet[1571]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:50:44 ha-485000 kubelet[1571]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:50:44 ha-485000 kubelet[1571]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:50:44 ha-485000 kubelet[1571]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:51:44 ha-485000 kubelet[1571]: E0725 17:51:44.123531    1571 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:51:44 ha-485000 kubelet[1571]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:51:44 ha-485000 kubelet[1571]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:51:44 ha-485000 kubelet[1571]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:51:44 ha-485000 kubelet[1571]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-485000 -n ha-485000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-485000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (226.36s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 node delete m03 -v=7 --alsologtostderr
E0725 10:51:52.279517    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-485000 node delete m03 -v=7 --alsologtostderr: (7.535407254s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr: exit status 2 (330.91206ms)

                                                
                                                
-- stdout --
	ha-485000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-485000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-485000-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 10:51:52.774281    3862 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:51:52.774550    3862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:51:52.774555    3862 out.go:304] Setting ErrFile to fd 2...
	I0725 10:51:52.774559    3862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:51:52.774725    3862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 10:51:52.774944    3862 out.go:298] Setting JSON to false
	I0725 10:51:52.774967    3862 mustload.go:65] Loading cluster: ha-485000
	I0725 10:51:52.775009    3862 notify.go:220] Checking for updates...
	I0725 10:51:52.775272    3862 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:51:52.775286    3862 status.go:255] checking status of ha-485000 ...
	I0725 10:51:52.775643    3862 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:51:52.775702    3862 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:51:52.784690    3862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52050
	I0725 10:51:52.785086    3862 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:51:52.785497    3862 main.go:141] libmachine: Using API Version  1
	I0725 10:51:52.785526    3862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:51:52.785720    3862 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:51:52.785834    3862 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:51:52.785924    3862 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:51:52.786008    3862 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3787
	I0725 10:51:52.786965    3862 status.go:330] ha-485000 host status = "Running" (err=<nil>)
	I0725 10:51:52.786984    3862 host.go:66] Checking if "ha-485000" exists ...
	I0725 10:51:52.787221    3862 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:51:52.787243    3862 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:51:52.795876    3862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52052
	I0725 10:51:52.796225    3862 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:51:52.796606    3862 main.go:141] libmachine: Using API Version  1
	I0725 10:51:52.796625    3862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:51:52.796840    3862 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:51:52.796952    3862 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:51:52.797044    3862 host.go:66] Checking if "ha-485000" exists ...
	I0725 10:51:52.797317    3862 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:51:52.797340    3862 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:51:52.805683    3862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52054
	I0725 10:51:52.805995    3862 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:51:52.806463    3862 main.go:141] libmachine: Using API Version  1
	I0725 10:51:52.806481    3862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:51:52.806659    3862 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:51:52.806752    3862 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:51:52.806879    3862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:51:52.806900    3862 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:51:52.806972    3862 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:51:52.807070    3862 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:51:52.807148    3862 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:51:52.807221    3862 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:51:52.836067    3862 ssh_runner.go:195] Run: systemctl --version
	I0725 10:51:52.840807    3862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:51:52.852704    3862 kubeconfig.go:125] found "ha-485000" server: "https://192.169.0.254:8443"
	I0725 10:51:52.852729    3862 api_server.go:166] Checking apiserver status ...
	I0725 10:51:52.852766    3862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 10:51:52.863866    3862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1931/cgroup
	W0725 10:51:52.872053    3862 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1931/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 10:51:52.872114    3862 ssh_runner.go:195] Run: ls
	I0725 10:51:52.875293    3862 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0725 10:51:52.879552    3862 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0725 10:51:52.879565    3862 status.go:422] ha-485000 apiserver status = Running (err=<nil>)
	I0725 10:51:52.879575    3862 status.go:257] ha-485000 status: &{Name:ha-485000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:51:52.879591    3862 status.go:255] checking status of ha-485000-m02 ...
	I0725 10:51:52.879870    3862 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:51:52.879890    3862 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:51:52.888487    3862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52058
	I0725 10:51:52.888837    3862 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:51:52.889165    3862 main.go:141] libmachine: Using API Version  1
	I0725 10:51:52.889187    3862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:51:52.889390    3862 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:51:52.889513    3862 main.go:141] libmachine: (ha-485000-m02) Calling .GetState
	I0725 10:51:52.889600    3862 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:51:52.889684    3862 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid from json: 3792
	I0725 10:51:52.890639    3862 status.go:330] ha-485000-m02 host status = "Running" (err=<nil>)
	I0725 10:51:52.890649    3862 host.go:66] Checking if "ha-485000-m02" exists ...
	I0725 10:51:52.890916    3862 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:51:52.890937    3862 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:51:52.899347    3862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52060
	I0725 10:51:52.899669    3862 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:51:52.900007    3862 main.go:141] libmachine: Using API Version  1
	I0725 10:51:52.900015    3862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:51:52.900238    3862 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:51:52.900354    3862 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:51:52.900432    3862 host.go:66] Checking if "ha-485000-m02" exists ...
	I0725 10:51:52.900684    3862 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:51:52.900710    3862 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:51:52.909135    3862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52062
	I0725 10:51:52.909483    3862 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:51:52.909787    3862 main.go:141] libmachine: Using API Version  1
	I0725 10:51:52.909795    3862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:51:52.909979    3862 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:51:52.910082    3862 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:51:52.910198    3862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:51:52.910209    3862 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:51:52.910288    3862 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:51:52.910376    3862 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:51:52.910478    3862 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:51:52.910549    3862 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:51:52.939992    3862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:51:52.951737    3862 kubeconfig.go:125] found "ha-485000" server: "https://192.169.0.254:8443"
	I0725 10:51:52.951754    3862 api_server.go:166] Checking apiserver status ...
	I0725 10:51:52.951796    3862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 10:51:52.963243    3862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2122/cgroup
	W0725 10:51:52.971024    3862 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2122/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 10:51:52.971076    3862 ssh_runner.go:195] Run: ls
	I0725 10:51:52.974102    3862 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0725 10:51:52.977181    3862 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0725 10:51:52.977192    3862 status.go:422] ha-485000-m02 apiserver status = Running (err=<nil>)
	I0725 10:51:52.977201    3862 status.go:257] ha-485000-m02 status: &{Name:ha-485000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:51:52.977212    3862 status.go:255] checking status of ha-485000-m04 ...
	I0725 10:51:52.977469    3862 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:51:52.977491    3862 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:51:52.986195    3862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52066
	I0725 10:51:52.986539    3862 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:51:52.986847    3862 main.go:141] libmachine: Using API Version  1
	I0725 10:51:52.986864    3862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:51:52.987067    3862 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:51:52.987180    3862 main.go:141] libmachine: (ha-485000-m04) Calling .GetState
	I0725 10:51:52.987266    3862 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:51:52.987343    3862 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid from json: 3811
	I0725 10:51:52.988301    3862 status.go:330] ha-485000-m04 host status = "Running" (err=<nil>)
	I0725 10:51:52.988313    3862 host.go:66] Checking if "ha-485000-m04" exists ...
	I0725 10:51:52.988542    3862 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:51:52.988565    3862 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:51:52.996889    3862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52068
	I0725 10:51:52.997212    3862 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:51:52.997539    3862 main.go:141] libmachine: Using API Version  1
	I0725 10:51:52.997554    3862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:51:52.997785    3862 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:51:52.997894    3862 main.go:141] libmachine: (ha-485000-m04) Calling .GetIP
	I0725 10:51:52.997982    3862 host.go:66] Checking if "ha-485000-m04" exists ...
	I0725 10:51:52.998229    3862 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:51:52.998263    3862 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:51:53.006648    3862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52070
	I0725 10:51:53.006978    3862 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:51:53.007282    3862 main.go:141] libmachine: Using API Version  1
	I0725 10:51:53.007291    3862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:51:53.007503    3862 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:51:53.007627    3862 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:51:53.007754    3862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:51:53.007765    3862 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:51:53.007860    3862 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:51:53.007958    3862 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:51:53.008041    3862 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:51:53.008120    3862 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:51:53.039441    3862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:51:53.049795    3862 status.go:257] ha-485000-m04 status: &{Name:ha-485000-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-485000 -n ha-485000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-485000 logs -n 25: (3.51823642s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n ha-485000-m02 sudo cat                                                                                      | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /home/docker/cp-test_ha-485000-m03_ha-485000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-485000 cp ha-485000-m03:/home/docker/cp-test.txt                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04:/home/docker/cp-test_ha-485000-m03_ha-485000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n ha-485000-m04 sudo cat                                                                                      | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /home/docker/cp-test_ha-485000-m03_ha-485000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-485000 cp testdata/cp-test.txt                                                                                            | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1573436771/001/cp-test_ha-485000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000:/home/docker/cp-test_ha-485000-m04_ha-485000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n ha-485000 sudo cat                                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /home/docker/cp-test_ha-485000-m04_ha-485000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m02:/home/docker/cp-test_ha-485000-m04_ha-485000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n ha-485000-m02 sudo cat                                                                                      | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /home/docker/cp-test_ha-485000-m04_ha-485000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt                                                                          | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m03:/home/docker/cp-test_ha-485000-m04_ha-485000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n                                                                                                             | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | ha-485000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-485000 ssh -n ha-485000-m03 sudo cat                                                                                      | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | /home/docker/cp-test_ha-485000-m04_ha-485000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-485000 node stop m02 -v=7                                                                                                 | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-485000 node start m02 -v=7                                                                                                | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:47 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-485000 -v=7                                                                                                       | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-485000 -v=7                                                                                                            | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:47 PDT | 25 Jul 24 10:48 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-485000 --wait=true -v=7                                                                                                | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:48 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-485000                                                                                                            | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:51 PDT |                     |
	| node    | ha-485000 node delete m03 -v=7                                                                                               | ha-485000 | jenkins | v1.33.1 | 25 Jul 24 10:51 PDT | 25 Jul 24 10:51 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 10:48:26
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 10:48:26.008505    3774 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:48:26.008703    3774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:48:26.008708    3774 out.go:304] Setting ErrFile to fd 2...
	I0725 10:48:26.008712    3774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:48:26.008889    3774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 10:48:26.010330    3774 out.go:298] Setting JSON to false
	I0725 10:48:26.034230    3774 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2876,"bootTime":1721926830,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 10:48:26.034337    3774 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:48:26.057780    3774 out.go:177] * [ha-485000] minikube v1.33.1 on Darwin 14.5
	I0725 10:48:26.099403    3774 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 10:48:26.099443    3774 notify.go:220] Checking for updates...
	I0725 10:48:26.142252    3774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:26.163519    3774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 10:48:26.184535    3774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:48:26.205465    3774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 10:48:26.226618    3774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 10:48:26.248320    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:26.248484    3774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:48:26.249112    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:26.249222    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:48:26.258893    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51885
	I0725 10:48:26.259439    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:48:26.260047    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:48:26.260058    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:48:26.260427    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:48:26.260644    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:26.289300    3774 out.go:177] * Using the hyperkit driver based on existing profile
	I0725 10:48:26.331665    3774 start.go:297] selected driver: hyperkit
	I0725 10:48:26.331692    3774 start.go:901] validating driver "hyperkit" against &{Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:fals
e efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:48:26.331911    3774 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 10:48:26.332099    3774 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:48:26.332295    3774 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 10:48:26.342212    3774 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 10:48:26.348291    3774 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:26.348316    3774 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 10:48:26.351632    3774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 10:48:26.351670    3774 cni.go:84] Creating CNI manager for ""
	I0725 10:48:26.351677    3774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 10:48:26.351755    3774 start.go:340] cluster config:
	{Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:48:26.351859    3774 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:48:26.394511    3774 out.go:177] * Starting "ha-485000" primary control-plane node in "ha-485000" cluster
	I0725 10:48:26.415566    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:48:26.415642    3774 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0725 10:48:26.415668    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:48:26.415915    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:48:26.415934    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:48:26.416129    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:26.417069    3774 start.go:360] acquireMachinesLock for ha-485000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:48:26.417183    3774 start.go:364] duration metric: took 90.924µs to acquireMachinesLock for "ha-485000"
	I0725 10:48:26.417209    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:48:26.417242    3774 fix.go:54] fixHost starting: 
	I0725 10:48:26.417573    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:26.417601    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:48:26.426437    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51887
	I0725 10:48:26.426806    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:48:26.427140    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:48:26.427151    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:48:26.427362    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:48:26.427487    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:26.427621    3774 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:48:26.427738    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:26.427816    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3271
	I0725 10:48:26.428722    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid 3271 missing from process table
	I0725 10:48:26.428789    3774 fix.go:112] recreateIfNeeded on ha-485000: state=Stopped err=<nil>
	I0725 10:48:26.428816    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	W0725 10:48:26.428913    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:48:26.450263    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000" ...
	I0725 10:48:26.492478    3774 main.go:141] libmachine: (ha-485000) Calling .Start
	I0725 10:48:26.492777    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:26.492834    3774 main.go:141] libmachine: (ha-485000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid
	I0725 10:48:26.494964    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid 3271 missing from process table
	I0725 10:48:26.494992    3774 main.go:141] libmachine: (ha-485000) DBG | pid 3271 is in state "Stopped"
	I0725 10:48:26.495011    3774 main.go:141] libmachine: (ha-485000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid...
	I0725 10:48:26.495351    3774 main.go:141] libmachine: (ha-485000) DBG | Using UUID 6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3
	I0725 10:48:26.602890    3774 main.go:141] libmachine: (ha-485000) DBG | Generated MAC 52:76:82:a1:51:13
	I0725 10:48:26.602911    3774 main.go:141] libmachine: (ha-485000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:48:26.603041    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003947b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:26.603067    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003947b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:26.603128    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/ha-485000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:48:26.603166    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6bc5138d-7fab-4fb5-b22e-c84ce3b6b1d3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/ha-485000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:48:26.603183    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:48:26.604450    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 DEBUG: hyperkit: Pid is 3787
	I0725 10:48:26.604799    3774 main.go:141] libmachine: (ha-485000) DBG | Attempt 0
	I0725 10:48:26.604824    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:26.604870    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3787
	I0725 10:48:26.606553    3774 main.go:141] libmachine: (ha-485000) DBG | Searching for 52:76:82:a1:51:13 in /var/db/dhcpd_leases ...
	I0725 10:48:26.606607    3774 main.go:141] libmachine: (ha-485000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:48:26.606642    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:48:26.606660    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e130}
	I0725 10:48:26.606672    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a3e09c}
	I0725 10:48:26.606684    3774 main.go:141] libmachine: (ha-485000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e019}
	I0725 10:48:26.606696    3774 main.go:141] libmachine: (ha-485000) DBG | Found match: 52:76:82:a1:51:13
	I0725 10:48:26.606707    3774 main.go:141] libmachine: (ha-485000) DBG | IP: 192.169.0.5
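The lease search above is how the hyperkit driver resolves a node's IP after a restart: it reads the macOS DHCP lease file and matches the MAC address it generated for the VM. Purely as a hedged aside (not from this run), the same lookup can be done by hand on the host; the grep flags and the use of sudo are assumptions about the local setup.

    # illustrative sketch, not from the run: look up the lease for the generated MAC on the macOS host
    sudo grep -i -B3 -A3 "52:76:82:a1:51:13" /var/db/dhcpd_leases
    # the ip_address field of the matching entry is what the driver reports as the node IP (192.169.0.5 here)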
	I0725 10:48:26.606731    3774 main.go:141] libmachine: (ha-485000) Calling .GetConfigRaw
	I0725 10:48:26.607371    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:26.607542    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:26.608260    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:48:26.608270    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:26.608385    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:26.608483    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:26.608567    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:26.608654    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:26.608755    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:26.608878    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:26.609107    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:26.609118    3774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 10:48:26.612320    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:48:26.665658    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:48:26.666425    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:26.666446    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:26.666486    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:26.666502    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:27.049138    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:48:27.049167    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:48:27.163716    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:27.163734    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:27.163745    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:27.163771    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:27.164666    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:48:27.164679    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:27 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:48:32.750889    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:48:32.750945    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:48:32.750956    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:48:32.776771    3774 main.go:141] libmachine: (ha-485000) DBG | 2024/07/25 10:48:32 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:48:37.667735    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:48:37.667749    3774 main.go:141] libmachine: (ha-485000) Calling .GetMachineName
	I0725 10:48:37.667914    3774 buildroot.go:166] provisioning hostname "ha-485000"
	I0725 10:48:37.667925    3774 main.go:141] libmachine: (ha-485000) Calling .GetMachineName
	I0725 10:48:37.668027    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.668112    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.668192    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.668288    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.668362    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.668500    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.668656    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.668664    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000 && echo "ha-485000" | sudo tee /etc/hostname
	I0725 10:48:37.727283    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000
	
	I0725 10:48:37.727299    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.727438    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.727532    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.727625    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.727717    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.727855    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.727982    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.727993    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:48:37.785056    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 10:48:37.785076    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:48:37.785094    3774 buildroot.go:174] setting up certificates
	I0725 10:48:37.785101    3774 provision.go:84] configureAuth start
	I0725 10:48:37.785108    3774 main.go:141] libmachine: (ha-485000) Calling .GetMachineName
	I0725 10:48:37.785245    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:37.785333    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.785431    3774 provision.go:143] copyHostCerts
	I0725 10:48:37.785463    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:37.785523    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:48:37.785532    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:37.785675    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:48:37.785906    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:37.785936    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:48:37.785941    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:37.786010    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:48:37.786157    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:37.786185    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:48:37.786190    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:37.786257    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:48:37.786415    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000 san=[127.0.0.1 192.169.0.5 ha-485000 localhost minikube]
	I0725 10:48:37.823550    3774 provision.go:177] copyRemoteCerts
	I0725 10:48:37.823600    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:48:37.823615    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.823730    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.823832    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.823929    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.824016    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:37.858513    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:48:37.858593    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0725 10:48:37.877705    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:48:37.877768    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 10:48:37.897239    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:48:37.897295    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:48:37.916551    3774 provision.go:87] duration metric: took 131.43608ms to configureAuth
	I0725 10:48:37.916563    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:48:37.916723    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:37.916742    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:37.916888    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.916985    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.917074    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.917167    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.917240    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.917351    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.917476    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.917483    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:48:37.966249    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:48:37.966266    3774 buildroot.go:70] root file system type: tmpfs
	I0725 10:48:37.966344    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:48:37.966356    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:37.966476    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:37.966563    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.966659    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:37.966744    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:37.966879    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:37.967017    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:37.967059    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:48:38.025932    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 10:48:38.025951    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:38.026084    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:38.026186    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:38.026311    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:38.026409    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:38.026537    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:38.026681    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:38.026694    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:48:39.678604    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
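The diff/mv command and its output above are the provisioner's idempotent unit swap: the rendered docker.service is written to docker.service.new, diffed against whatever is already installed, and only moved into place (followed by daemon-reload, enable, and restart) when diff reports a difference or cannot find the old file. The "diff: can't stat" message simply means no unit was present at /lib/systemd/system on the freshly booted guest (its root filesystem is tmpfs, per the earlier df check), so the new file was installed and enabled, as the "Created symlink" line confirms. As a hedged, illustrative follow-up only (not executed in this run), the result could be checked on the guest like this:

    # illustrative sketch, not from the run: confirm the swapped-in unit is what systemd loaded
    sudo systemctl is-enabled docker           # expect "enabled" after the symlink message above
    sudo systemctl show -p ExecStart docker    # expect a single ExecStart, as the drop-in comments require
    sudo systemctl cat docker | head -n 20     # prints the installed docker.service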
	
	I0725 10:48:39.678619    3774 machine.go:97] duration metric: took 13.070176391s to provisionDockerMachine
	I0725 10:48:39.678630    3774 start.go:293] postStartSetup for "ha-485000" (driver="hyperkit")
	I0725 10:48:39.678637    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:48:39.678650    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.678827    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:48:39.678844    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.678949    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.679038    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.679143    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.679235    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.716368    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:48:39.720567    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:48:39.720581    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:48:39.720675    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:48:39.720817    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:48:39.720823    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:48:39.720982    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:48:39.729186    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:48:39.758308    3774 start.go:296] duration metric: took 79.656539ms for postStartSetup
	I0725 10:48:39.758333    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.758515    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:48:39.758527    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.758630    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.758718    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.758821    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.758909    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.790766    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:48:39.790818    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:48:39.841417    3774 fix.go:56] duration metric: took 13.423999639s for fixHost
	I0725 10:48:39.841437    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.841579    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.841669    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.841753    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.841830    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.841969    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:39.842111    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0725 10:48:39.842118    3774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 10:48:39.893711    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929719.869493557
	
	I0725 10:48:39.893723    3774 fix.go:216] guest clock: 1721929719.869493557
	I0725 10:48:39.893729    3774 fix.go:229] Guest: 2024-07-25 10:48:39.869493557 -0700 PDT Remote: 2024-07-25 10:48:39.841427 -0700 PDT m=+13.869378775 (delta=28.066557ms)
	I0725 10:48:39.893749    3774 fix.go:200] guest clock delta is within tolerance: 28.066557ms
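	(Note: the two timestamps above come from running `date +%s.%N` on the guest over SSH and comparing against the host clock. A minimal sketch of reproducing the check by hand; the host-side python3/bc fallback is an assumption, and %N only works on the guest's GNU date, not BSD date on macOS:)
	  GUEST_TS=$(ssh -i /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa docker@192.169.0.5 'date +%s.%N')   # guest clock, seconds.nanoseconds
	  HOST_TS=$(python3 -c 'import time; print(f"{time.time():.9f}")')                                                                          # host clock with sub-second precision
	  echo "delta: $(echo "$HOST_TS - $GUEST_TS" | bc)s"                                                                                        # should stay within a small tolerance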
	I0725 10:48:39.893753    3774 start.go:83] releasing machines lock for "ha-485000", held for 13.476381445s
	I0725 10:48:39.893772    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.893900    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:39.894007    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.894332    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.894445    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:48:39.894524    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:48:39.894561    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.894577    3774 ssh_runner.go:195] Run: cat /version.json
	I0725 10:48:39.894588    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:48:39.894652    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.894680    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:48:39.894746    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.894757    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:48:39.894831    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.894854    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:48:39.894930    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.894951    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:48:39.969105    3774 ssh_runner.go:195] Run: systemctl --version
	I0725 10:48:39.974344    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 10:48:39.978550    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:48:39.978588    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:48:39.992374    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 10:48:39.992386    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:39.992494    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:40.010041    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:48:40.018981    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:48:40.027827    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:48:40.027880    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:48:40.036849    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:40.045802    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:48:40.054565    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:40.063403    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:48:40.072492    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:48:40.081289    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:48:40.089964    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:48:40.098883    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:48:40.106915    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:48:40.114912    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:40.213735    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
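	(Note: the sed commands above, 10:48:40.010 through 10:48:40.090, rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver, the registry.k8s.io/pause:3.9 sandbox image, and /etc/cni/net.d for CNI config. A quick manual verification, a sketch rather than part of the test, reusing the SSH client details shown earlier in this log:)
	  ssh -i /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa docker@192.169.0.5 \
	    "grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml"
	  # expected after the edits: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.9", conf_dir = "/etc/cni/net.d"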
	I0725 10:48:40.232850    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:40.232927    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:48:40.247072    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:40.260328    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:48:40.277505    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:40.288634    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:40.302282    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:48:40.323941    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:40.334346    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:40.349841    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:48:40.352851    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:48:40.359956    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:48:40.373249    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:48:40.468940    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:48:40.562165    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:48:40.562232    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 10:48:40.576420    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:40.666619    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:48:42.973495    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.306818339s)
	I0725 10:48:42.973567    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 10:48:42.984136    3774 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0725 10:48:42.997023    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:48:43.007459    3774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 10:48:43.101460    3774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 10:48:43.206558    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:43.318643    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 10:48:43.332235    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:48:43.343300    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:43.439386    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 10:48:43.504079    3774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 10:48:43.504167    3774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 10:48:43.509100    3774 start.go:563] Will wait 60s for crictl version
	I0725 10:48:43.509160    3774 ssh_runner.go:195] Run: which crictl
	I0725 10:48:43.514298    3774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 10:48:43.540285    3774 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0725 10:48:43.540359    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:48:43.556856    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:48:43.619142    3774 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0725 10:48:43.619193    3774 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:48:43.619596    3774 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0725 10:48:43.624261    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:48:43.634147    3774 kubeadm.go:883] updating cluster {Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false f
reshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 10:48:43.634230    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:48:43.634284    3774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 10:48:43.648086    3774 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0725 10:48:43.648098    3774 docker.go:615] Images already preloaded, skipping extraction
	I0725 10:48:43.648178    3774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 10:48:43.661887    3774 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0725 10:48:43.661905    3774 cache_images.go:84] Images are preloaded, skipping loading
	I0725 10:48:43.661914    3774 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.3 docker true true} ...
	I0725 10:48:43.661994    3774 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-485000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
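	(Note: the [Service] override above is what minikube writes into the kubelet systemd drop-in, presumably the 307-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf at 10:48:43 below; the empty ExecStart= line clears the stock unit's command before the minikube-specific one is set. A manual way to confirm the drop-in took effect, sketch only:)
	  ssh -i /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa docker@192.169.0.5 \
	    "systemctl cat kubelet | grep -- '--node-ip'"
	  # expected: an ExecStart line ending in --node-ip=192.169.0.5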
	I0725 10:48:43.662065    3774 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 10:48:43.699921    3774 cni.go:84] Creating CNI manager for ""
	I0725 10:48:43.699936    3774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 10:48:43.699949    3774 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 10:48:43.699966    3774 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-485000 NodeName:ha-485000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 10:48:43.700056    3774 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-485000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
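	(Note: the kubeadm config printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new, the 2148-byte scp at 10:48:43.755 below, and diffed against the existing /var/tmp/minikube/kubeadm.yaml at 10:48:44.466; because they match, this restart path never re-runs init. On a fresh cluster a config like this would typically be consumed along these lines, a sketch rather than a command from this log:)
	  sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml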
	
	I0725 10:48:43.700077    3774 kube-vip.go:115] generating kube-vip config ...
	I0725 10:48:43.700127    3774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 10:48:43.712809    3774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 10:48:43.712873    3774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
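	(Note: the static pod manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml, the 1440-byte scp at 10:48:43.769 below; kube-vip announces the control-plane VIP 192.169.0.254 over ARP on eth0 and load-balances port 8443 across the control-plane nodes. A manual spot check once the pod is running, a sketch not part of the test:)
	  ssh -i /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa docker@192.169.0.5 \
	    'ip addr show eth0 | grep 192.169.0.254'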
	I0725 10:48:43.712925    3774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 10:48:43.721182    3774 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 10:48:43.721226    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0725 10:48:43.728575    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0725 10:48:43.742374    3774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 10:48:43.755567    3774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0725 10:48:43.769800    3774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0725 10:48:43.783433    3774 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0725 10:48:43.786504    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:48:43.795954    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:43.896290    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:48:43.910403    3774 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000 for IP: 192.169.0.5
	I0725 10:48:43.910418    3774 certs.go:194] generating shared ca certs ...
	I0725 10:48:43.910428    3774 certs.go:226] acquiring lock for ca certs: {Name:mk9c2ae9585d6ecb7080ac84ff2a6c71f713722e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:43.910590    3774 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key
	I0725 10:48:43.910647    3774 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key
	I0725 10:48:43.910658    3774 certs.go:256] generating profile certs ...
	I0725 10:48:43.910746    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key
	I0725 10:48:43.910769    3774 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f
	I0725 10:48:43.910786    3774 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0725 10:48:44.010960    3774 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f ...
	I0725 10:48:44.010977    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f: {Name:mka1c7bb5889cefec4fa34bda59b0dccc014b849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.011374    3774 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f ...
	I0725 10:48:44.011384    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f: {Name:mk2a7443f9ec44bdbab1eccd742bb8d7bd46104e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.011591    3774 certs.go:381] copying /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt.26f1378f -> /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt
	I0725 10:48:44.011796    3774 certs.go:385] copying /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.26f1378f -> /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key
	I0725 10:48:44.012023    3774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key
	I0725 10:48:44.012033    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 10:48:44.012056    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 10:48:44.012075    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 10:48:44.012095    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 10:48:44.012113    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 10:48:44.012132    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 10:48:44.012152    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 10:48:44.012170    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 10:48:44.012249    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem (1338 bytes)
	W0725 10:48:44.012300    3774 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732_empty.pem, impossibly tiny 0 bytes
	I0725 10:48:44.012308    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 10:48:44.012345    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem (1078 bytes)
	I0725 10:48:44.012379    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem (1123 bytes)
	I0725 10:48:44.012417    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem (1679 bytes)
	I0725 10:48:44.012485    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:48:44.012517    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.012537    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem -> /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.012555    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.013001    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 10:48:44.040701    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 10:48:44.077388    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 10:48:44.112787    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 10:48:44.159876    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 10:48:44.190098    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 10:48:44.210542    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 10:48:44.230450    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 10:48:44.251339    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 10:48:44.271102    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem --> /usr/share/ca-certificates/1732.pem (1338 bytes)
	I0725 10:48:44.290804    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /usr/share/ca-certificates/17322.pem (1708 bytes)
	I0725 10:48:44.310754    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 10:48:44.324090    3774 ssh_runner.go:195] Run: openssl version
	I0725 10:48:44.328453    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 10:48:44.336951    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.340313    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.340348    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:48:44.344548    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 10:48:44.352831    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1732.pem && ln -fs /usr/share/ca-certificates/1732.pem /etc/ssl/certs/1732.pem"
	I0725 10:48:44.360980    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.364473    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:38 /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.364507    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1732.pem
	I0725 10:48:44.368814    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1732.pem /etc/ssl/certs/51391683.0"
	I0725 10:48:44.377043    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17322.pem && ln -fs /usr/share/ca-certificates/17322.pem /etc/ssl/certs/17322.pem"
	I0725 10:48:44.385344    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.388809    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:38 /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.388844    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17322.pem
	I0725 10:48:44.393238    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17322.pem /etc/ssl/certs/3ec20f2e.0"
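	(Note: the three test/ln pairs above install each CA into /etc/ssl/certs under OpenSSL's hash-named layout: the filename is the certificate's subject hash plus ".0". The hard-coded names b5213941.0, 51391683.0 and 3ec20f2e.0 can be reproduced by hand, sketch only, shown here for minikubeCA.pem:)
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 for minikubeCA.pem in this log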
	I0725 10:48:44.401504    3774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 10:48:44.404983    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 10:48:44.409808    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 10:48:44.414092    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 10:48:44.418841    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 10:48:44.423109    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 10:48:44.427402    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 10:48:44.432185    3774 kubeadm.go:392] StartCluster: {Name:ha-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:48:44.432302    3774 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 10:48:44.448953    3774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 10:48:44.456669    3774 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 10:48:44.456681    3774 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 10:48:44.456727    3774 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 10:48:44.465243    3774 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 10:48:44.465557    3774 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-485000" does not appear in /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:44.465649    3774 kubeconfig.go:62] /Users/jenkins/minikube-integration/19326-1195/kubeconfig needs updating (will repair): [kubeconfig missing "ha-485000" cluster setting kubeconfig missing "ha-485000" context setting]
	I0725 10:48:44.465837    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/kubeconfig: {Name:mkc008ed365c6765ce04b67847a4585ec214b70b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.466249    3774 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:44.466441    3774 kapi.go:59] client config for ha-485000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe203400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 10:48:44.466804    3774 cert_rotation.go:137] Starting client certificate rotation controller
	I0725 10:48:44.466963    3774 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 10:48:44.474340    3774 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0725 10:48:44.474351    3774 kubeadm.go:597] duration metric: took 17.665834ms to restartPrimaryControlPlane
	I0725 10:48:44.474370    3774 kubeadm.go:394] duration metric: took 42.188275ms to StartCluster
	I0725 10:48:44.474382    3774 settings.go:142] acquiring lock: {Name:mk4f7e43bf5353228d4c27f1f08450065f65cd00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.474454    3774 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:48:44.474852    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/kubeconfig: {Name:mkc008ed365c6765ce04b67847a4585ec214b70b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:48:44.475072    3774 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 10:48:44.475085    3774 start.go:241] waiting for startup goroutines ...
	I0725 10:48:44.475106    3774 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 10:48:44.475233    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:44.518490    3774 out.go:177] * Enabled addons: 
	I0725 10:48:44.539102    3774 addons.go:510] duration metric: took 64.005371ms for enable addons: enabled=[]
	I0725 10:48:44.539140    3774 start.go:246] waiting for cluster config update ...
	I0725 10:48:44.539164    3774 start.go:255] writing updated cluster config ...
	I0725 10:48:44.561436    3774 out.go:177] 
	I0725 10:48:44.582967    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:44.583098    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:44.605394    3774 out.go:177] * Starting "ha-485000-m02" control-plane node in "ha-485000" cluster
	I0725 10:48:44.647510    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:48:44.647544    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:48:44.647720    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:48:44.647738    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:48:44.647870    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:44.648902    3774 start.go:360] acquireMachinesLock for ha-485000-m02: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:48:44.649015    3774 start.go:364] duration metric: took 81.917µs to acquireMachinesLock for "ha-485000-m02"
	I0725 10:48:44.649041    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:48:44.649050    3774 fix.go:54] fixHost starting: m02
	I0725 10:48:44.649495    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:48:44.649529    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:48:44.659031    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51909
	I0725 10:48:44.659557    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:48:44.659989    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:48:44.660004    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:48:44.660364    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:48:44.660504    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:44.660702    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetState
	I0725 10:48:44.660973    3774 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:44.661074    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid from json: 3731
	I0725 10:48:44.661956    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid 3731 missing from process table
	I0725 10:48:44.662000    3774 fix.go:112] recreateIfNeeded on ha-485000-m02: state=Stopped err=<nil>
	I0725 10:48:44.662009    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	W0725 10:48:44.662092    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:48:44.704559    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000-m02" ...
	I0725 10:48:44.726283    3774 main.go:141] libmachine: (ha-485000-m02) Calling .Start
	I0725 10:48:44.726558    3774 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:44.726661    3774 main.go:141] libmachine: (ha-485000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid
	I0725 10:48:44.728373    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid 3731 missing from process table
	I0725 10:48:44.728390    3774 main.go:141] libmachine: (ha-485000-m02) DBG | pid 3731 is in state "Stopped"
	I0725 10:48:44.728407    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid...
	I0725 10:48:44.728847    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Using UUID 528f0647-a045-4ab7-922b-886237fb4fc4
	I0725 10:48:44.756033    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Generated MAC c2:64:80:a8:d2:48
	I0725 10:48:44.756067    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:48:44.756191    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"528f0647-a045-4ab7-922b-886237fb4fc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003968d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:44.756227    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"528f0647-a045-4ab7-922b-886237fb4fc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003968d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:48:44.756275    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "528f0647-a045-4ab7-922b-886237fb4fc4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/ha-485000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machine
s/ha-485000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:48:44.756319    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 528f0647-a045-4ab7-922b-886237fb4fc4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/ha-485000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:48:44.756334    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:48:44.757674    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 DEBUG: hyperkit: Pid is 3792
	I0725 10:48:44.758132    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Attempt 0
	I0725 10:48:44.758146    3774 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:48:44.758210    3774 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid from json: 3792
	I0725 10:48:44.759852    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Searching for c2:64:80:a8:d2:48 in /var/db/dhcpd_leases ...
	I0725 10:48:44.759913    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:48:44.759930    3774 main.go:141] libmachine: (ha-485000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e173}
	I0725 10:48:44.759945    3774 main.go:141] libmachine: (ha-485000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:48:44.759953    3774 main.go:141] libmachine: (ha-485000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e130}
	I0725 10:48:44.759960    3774 main.go:141] libmachine: (ha-485000-m02) DBG | Found match: c2:64:80:a8:d2:48
	I0725 10:48:44.759970    3774 main.go:141] libmachine: (ha-485000-m02) DBG | IP: 192.169.0.6
	I0725 10:48:44.759997    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetConfigRaw
	I0725 10:48:44.760701    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:48:44.760893    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:48:44.761371    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:48:44.761383    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:44.761484    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:44.761567    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:44.761671    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:44.761791    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:44.761906    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:44.762039    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:44.762188    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:44.762196    3774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 10:48:44.765251    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:48:44.773188    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:48:44.774148    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:44.774173    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:44.774196    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:44.774224    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:45.156825    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:48:45.156838    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:48:45.271856    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:48:45.271872    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:48:45.271881    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:48:45.271892    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:48:45.272766    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:48:45.272776    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:48:50.885003    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:48:50.885077    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:48:50.885089    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:48:50.908756    3774 main.go:141] libmachine: (ha-485000-m02) DBG | 2024/07/25 10:48:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:48:55.821181    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:48:55.821195    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetMachineName
	I0725 10:48:55.821334    3774 buildroot.go:166] provisioning hostname "ha-485000-m02"
	I0725 10:48:55.821345    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetMachineName
	I0725 10:48:55.821436    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:55.821525    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:55.821602    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.821685    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.821770    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:55.821916    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:55.822063    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:55.822074    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000-m02 && echo "ha-485000-m02" | sudo tee /etc/hostname
	I0725 10:48:55.882249    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000-m02
	
	I0725 10:48:55.882268    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:55.882410    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:55.882498    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.882588    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:55.882688    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:55.882825    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:55.883013    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:55.883027    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:48:55.939117    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 10:48:55.939132    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:48:55.939142    3774 buildroot.go:174] setting up certificates
	I0725 10:48:55.939148    3774 provision.go:84] configureAuth start
	I0725 10:48:55.939154    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetMachineName
	I0725 10:48:55.939283    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:48:55.939381    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:55.939461    3774 provision.go:143] copyHostCerts
	I0725 10:48:55.939491    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:55.939543    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:48:55.939549    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:48:55.939688    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:48:55.939893    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:55.939923    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:48:55.939928    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:48:55.940045    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:48:55.940199    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:55.940230    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:48:55.940235    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:48:55.940305    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:48:55.940447    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000-m02 san=[127.0.0.1 192.169.0.6 ha-485000-m02 localhost minikube]
	I0725 10:48:56.088970    3774 provision.go:177] copyRemoteCerts
	I0725 10:48:56.089020    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:48:56.089034    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.089186    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.089282    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.089402    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.089501    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:56.122259    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:48:56.122325    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:48:56.141398    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:48:56.141472    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 10:48:56.160336    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:48:56.160401    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 10:48:56.179351    3774 provision.go:87] duration metric: took 240.193399ms to configureAuth
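
configureAuth, timed above at roughly 240ms, regenerates the Docker TLS material: it copies the host CA and client certs, signs a server certificate whose SANs cover 127.0.0.1, the node IP 192.169.0.6, the machine name and localhost/minikube, then ships ca.pem, server.pem and server-key.pem to /etc/docker. For illustration only, here is a self-signed variant of such a SAN-bearing certificate built with Go's standard library; the key size and validity are assumptions, and minikube signs against its own CA rather than self-signing.

// servercert.go - issue a self-signed TLS server certificate with the SANs
// seen in the log. Illustrative sketch, not minikube's provisioning code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-485000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-485000-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
	}

	// Self-signed: the template doubles as the issuer certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
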
	I0725 10:48:56.179364    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:48:56.179528    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:48:56.179541    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:56.179672    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.179753    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.179827    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.179907    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.179983    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.180095    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:56.180218    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:56.180226    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:48:56.231701    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:48:56.231712    3774 buildroot.go:70] root file system type: tmpfs
	I0725 10:48:56.231785    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:48:56.231798    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.231926    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.232020    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.232113    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.232213    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.232352    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:56.232487    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:56.232547    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:48:56.292824    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 10:48:56.292843    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:56.292983    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:56.293079    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.293173    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:56.293276    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:56.293398    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:56.293536    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:56.293548    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:48:57.936294    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 10:48:57.936308    3774 machine.go:97] duration metric: took 13.174752883s to provisionDockerMachine
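
The unit update at 10:48:56-57 follows a write-compare-swap pattern: render docker.service.new, diff it against the live unit, and only move it into place, daemon-reload and restart Docker when something changed (here the diff fails because no unit exists yet, so the new file is installed and the service enabled). A local, stdlib-only sketch of that pattern; paths and the restart commands are illustrative, and the real flow in the log runs these steps over SSH inside the guest.

// unitswap.go - install a rendered systemd unit only when it differs from the
// one on disk, then reload and restart the service.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func installIfChanged(rendered []byte, path string) (changed bool, err error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // nothing to do
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	// Write the candidate next to the target, then swap it into place.
	tmp := path + ".new"
	if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // rendered elsewhere
	changed, err := installIfChanged(unit, "/lib/systemd/system/docker.service")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if changed {
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "%v: %s\n", err, out)
				os.Exit(1)
			}
		}
	}
}
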
	I0725 10:48:57.936315    3774 start.go:293] postStartSetup for "ha-485000-m02" (driver="hyperkit")
	I0725 10:48:57.936322    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:48:57.936333    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:57.936508    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:48:57.936520    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:57.936625    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:57.936725    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:57.936811    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:57.936919    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:57.973264    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:48:57.978182    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:48:57.978195    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:48:57.978293    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:48:57.978433    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:48:57.978439    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:48:57.978595    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:48:57.987699    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:48:58.019591    3774 start.go:296] duration metric: took 83.266386ms for postStartSetup
	I0725 10:48:58.019613    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.019795    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:48:58.019808    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:58.019904    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.019990    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.020087    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.020182    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:58.051727    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:48:58.051783    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:48:58.105518    3774 fix.go:56] duration metric: took 13.45628652s for fixHost
	I0725 10:48:58.105546    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:58.105686    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.105772    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.105857    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.105932    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.106046    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:48:58.106195    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0725 10:48:58.106205    3774 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 10:48:58.159243    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929738.065939069
	
	I0725 10:48:58.159255    3774 fix.go:216] guest clock: 1721929738.065939069
	I0725 10:48:58.159260    3774 fix.go:229] Guest: 2024-07-25 10:48:58.065939069 -0700 PDT Remote: 2024-07-25 10:48:58.105535 -0700 PDT m=+32.133243684 (delta=-39.595931ms)
	I0725 10:48:58.159284    3774 fix.go:200] guest clock delta is within tolerance: -39.595931ms
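
The fix-up step just above runs `date +%s.%N` in the guest, compares the result with the host clock and accepts the machine because the skew (about -39.6ms) is within tolerance. A stdlib-only sketch of that comparison follows; the tolerance value is an assumption for illustration, not minikube's exact threshold.

// clockskew.go - compare a guest's "date +%s.%N" output against the host clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1721929738.065939069" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad to 9 digits so ".0659" means 065900000 nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721929738.065939069") // value from the log
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(float64(delta)) <= float64(tolerance))
}
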
	I0725 10:48:58.159289    3774 start.go:83] releasing machines lock for "ha-485000-m02", held for 13.510083839s
	I0725 10:48:58.159306    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.159443    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:48:58.185128    3774 out.go:177] * Found network options:
	I0725 10:48:58.204774    3774 out.go:177]   - NO_PROXY=192.169.0.5
	W0725 10:48:58.225878    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:48:58.225912    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.226598    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.226812    3774 main.go:141] libmachine: (ha-485000-m02) Calling .DriverName
	I0725 10:48:58.226934    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:48:58.226975    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	W0725 10:48:58.227058    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:48:58.227182    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 10:48:58.227214    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHHostname
	I0725 10:48:58.227277    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.227471    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHPort
	I0725 10:48:58.227493    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.227612    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.227649    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHKeyPath
	I0725 10:48:58.227722    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	I0725 10:48:58.227752    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetSSHUsername
	I0725 10:48:58.227862    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m02/id_rsa Username:docker}
	W0725 10:48:58.255968    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:48:58.256032    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:48:58.313620    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 10:48:58.313643    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:58.313758    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:58.330047    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:48:58.338245    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:48:58.346310    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:48:58.346349    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:48:58.354315    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:58.362619    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:48:58.370851    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:48:58.379085    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:48:58.387426    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:48:58.395620    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:48:58.403886    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:48:58.412116    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:48:58.419752    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:48:58.427324    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:58.523289    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0725 10:48:58.542645    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:48:58.542713    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:48:58.555600    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:58.573132    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:48:58.586107    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:48:58.596266    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:58.606623    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:48:58.626833    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:48:58.637094    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:48:58.651924    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:48:58.654935    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:48:58.662286    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:48:58.675716    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:48:58.765779    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:48:58.866546    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:48:58.866576    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 10:48:58.880570    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:48:58.988028    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:49:01.326397    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.338319421s)
	I0725 10:49:01.326462    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 10:49:01.336948    3774 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0725 10:49:01.349778    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:49:01.360626    3774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 10:49:01.455101    3774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 10:49:01.569356    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:01.667972    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 10:49:01.681113    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:49:01.691490    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:01.801249    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 10:49:01.864595    3774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 10:49:01.864666    3774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
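
start.go then waits up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl. A minimal polling sketch of that wait; the interval and error wording are illustrative.

// waitsock.go - poll for a unix socket path with a deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket returns nil once path exists and is a socket, or an error
// when the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cri-dockerd socket is ready")
}
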
	I0725 10:49:01.869013    3774 start.go:563] Will wait 60s for crictl version
	I0725 10:49:01.869064    3774 ssh_runner.go:195] Run: which crictl
	I0725 10:49:01.872470    3774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 10:49:01.897402    3774 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0725 10:49:01.897474    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:49:01.915840    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:49:01.955682    3774 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0725 10:49:01.997327    3774 out.go:177]   - env NO_PROXY=192.169.0.5
	I0725 10:49:02.018327    3774 main.go:141] libmachine: (ha-485000-m02) Calling .GetIP
	I0725 10:49:02.018733    3774 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0725 10:49:02.023070    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
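
The command above pins host.minikube.internal to the gateway address by filtering any stale mapping out of /etc/hosts and appending a fresh one. The same idempotent edit done natively in Go, pointed at a scratch file ("hosts.sample" is a placeholder) so the sketch can be tried safely:

// hostsentry.go - idempotently pin a hosts entry, mirroring the grep/echo pipeline above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) || line == "" {
			continue // drop stale mappings and trailing blanks
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHostsEntry("hosts.sample", "192.169.0.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
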
	I0725 10:49:02.032355    3774 mustload.go:65] Loading cluster: ha-485000
	I0725 10:49:02.032524    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:02.032743    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:49:02.032766    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:49:02.041277    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51931
	I0725 10:49:02.041655    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:49:02.041981    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:49:02.041992    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:49:02.042211    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:49:02.042328    3774 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:49:02.042405    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:02.042478    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3787
	I0725 10:49:02.043429    3774 host.go:66] Checking if "ha-485000" exists ...
	I0725 10:49:02.043673    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:49:02.043701    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:49:02.052045    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51933
	I0725 10:49:02.052388    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:49:02.052739    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:49:02.052755    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:49:02.052955    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:49:02.053107    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:49:02.053221    3774 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000 for IP: 192.169.0.6
	I0725 10:49:02.053230    3774 certs.go:194] generating shared ca certs ...
	I0725 10:49:02.053241    3774 certs.go:226] acquiring lock for ca certs: {Name:mk9c2ae9585d6ecb7080ac84ff2a6c71f713722e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:49:02.053422    3774 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key
	I0725 10:49:02.053492    3774 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key
	I0725 10:49:02.053502    3774 certs.go:256] generating profile certs ...
	I0725 10:49:02.053609    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key
	I0725 10:49:02.053685    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.71a9457c
	I0725 10:49:02.053735    3774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key
	I0725 10:49:02.053742    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 10:49:02.053762    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 10:49:02.053782    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 10:49:02.053800    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 10:49:02.053818    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 10:49:02.053836    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 10:49:02.053855    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 10:49:02.053873    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 10:49:02.053951    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem (1338 bytes)
	W0725 10:49:02.054004    3774 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732_empty.pem, impossibly tiny 0 bytes
	I0725 10:49:02.054013    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 10:49:02.054048    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem (1078 bytes)
	I0725 10:49:02.054088    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem (1123 bytes)
	I0725 10:49:02.054118    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem (1679 bytes)
	I0725 10:49:02.054190    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:49:02.054224    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.054248    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem -> /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.054268    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.054296    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:49:02.054399    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:49:02.054491    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:49:02.054572    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:49:02.054658    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:49:02.079258    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0725 10:49:02.082822    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0725 10:49:02.090699    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0725 10:49:02.093745    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0725 10:49:02.101464    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0725 10:49:02.104537    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0725 10:49:02.112169    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0725 10:49:02.115278    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0725 10:49:02.123703    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0725 10:49:02.126716    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0725 10:49:02.134446    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0725 10:49:02.137824    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0725 10:49:02.146212    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 10:49:02.166591    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 10:49:02.186453    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 10:49:02.205945    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 10:49:02.225778    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 10:49:02.245674    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 10:49:02.266075    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 10:49:02.286003    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 10:49:02.305311    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 10:49:02.325216    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem --> /usr/share/ca-certificates/1732.pem (1338 bytes)
	I0725 10:49:02.345019    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /usr/share/ca-certificates/17322.pem (1708 bytes)
	I0725 10:49:02.365056    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0725 10:49:02.378609    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0725 10:49:02.392247    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0725 10:49:02.405745    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0725 10:49:02.419356    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0725 10:49:02.432750    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0725 10:49:02.446244    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0725 10:49:02.459911    3774 ssh_runner.go:195] Run: openssl version
	I0725 10:49:02.464066    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 10:49:02.472406    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.475732    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.475780    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:49:02.479985    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 10:49:02.488332    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1732.pem && ln -fs /usr/share/ca-certificates/1732.pem /etc/ssl/certs/1732.pem"
	I0725 10:49:02.496582    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.499979    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:38 /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.500026    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1732.pem
	I0725 10:49:02.504179    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1732.pem /etc/ssl/certs/51391683.0"
	I0725 10:49:02.513038    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17322.pem && ln -fs /usr/share/ca-certificates/17322.pem /etc/ssl/certs/17322.pem"
	I0725 10:49:02.521433    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.524771    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:38 /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.524804    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17322.pem
	I0725 10:49:02.528889    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17322.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 10:49:02.537109    3774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 10:49:02.540476    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 10:49:02.544803    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 10:49:02.548989    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 10:49:02.553131    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 10:49:02.557276    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 10:49:02.561375    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 10:49:02.565522    3774 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.3 docker true true} ...
	I0725 10:49:02.565585    3774 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-485000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
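
kubeadm.go:946 assembles the kubelet drop-in above from per-node values (binary version, hostname override, node IP). A text/template sketch that produces an equivalent unit; the template and field names are illustrative, not minikube's internal types.

// kubeletflags.go - render a kubelet ExecStart drop-in from per-node values.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(unit))
	err := tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.3", "ha-485000-m02", "192.169.0.6"})
	if err != nil {
		panic(err)
	}
}
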
	I0725 10:49:02.565604    3774 kube-vip.go:115] generating kube-vip config ...
	I0725 10:49:02.565637    3774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 10:49:02.578066    3774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 10:49:02.578105    3774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
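
The manifest above runs kube-vip with control-plane load balancing on the shared VIP 192.169.0.254:8443. A quick way to see whether the current leader is answering on that VIP is a plain TCP dial; this probe is illustrative and not part of the test itself.

// vipcheck.go - reachability probe for the kube-vip control-plane VIP.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort("192.169.0.254", "8443"), 3*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "VIP not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("control-plane VIP is accepting connections on 8443")
}
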
	I0725 10:49:02.578150    3774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 10:49:02.585892    3774 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 10:49:02.585944    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0725 10:49:02.593082    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0725 10:49:02.606549    3774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 10:49:02.620275    3774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0725 10:49:02.633644    3774 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0725 10:49:02.636442    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:49:02.645688    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:02.737901    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:49:02.753131    3774 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 10:49:02.753310    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:02.774683    3774 out.go:177] * Verifying Kubernetes components...
	I0725 10:49:02.795324    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:02.913425    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:49:02.928242    3774 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:49:02.928448    3774 kapi.go:59] client config for ha-485000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe203400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0725 10:49:02.928483    3774 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0725 10:49:02.928641    3774 node_ready.go:35] waiting up to 6m0s for node "ha-485000-m02" to be "Ready" ...
	I0725 10:49:02.928720    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:02.928725    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:02.928733    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:02.928736    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.586324    3774 round_trippers.go:574] Response Status: 200 OK in 9657 milliseconds
	I0725 10:49:12.588091    3774 node_ready.go:49] node "ha-485000-m02" has status "Ready":"True"
	I0725 10:49:12.588104    3774 node_ready.go:38] duration metric: took 9.659318554s for node "ha-485000-m02" to be "Ready" ...
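
node_ready.go polls GET /api/v1/nodes/ha-485000-m02 (note the stale 192.169.0.254 host being overridden to 192.169.0.5 just above) until the node reports a Ready=True condition, which here takes about 9.7s. Below is a standard-library sketch of that poll, authenticating with the client cert/key and CA paths shown in the kapi client config; minikube itself goes through client-go rather than raw HTTP.

// nodeready.go - poll the Kubernetes API until a node has condition Ready=True.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func newClient(certFile, keyFile, caFile string) (*http.Client, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	return &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		}},
	}, nil
}

func nodeIsReady(c *http.Client, server, name string) (bool, error) {
	resp, err := c.Get(fmt.Sprintf("%s/api/v1/nodes/%s", server, name))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == "Ready" {
			return cond.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	base := "/Users/jenkins/minikube-integration/19326-1195/.minikube"
	c, err := newClient(base+"/profiles/ha-485000/client.crt",
		base+"/profiles/ha-485000/client.key", base+"/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	deadline := time.Now().Add(6 * time.Minute) // same overall wait as the log
	for time.Now().Before(deadline) {
		if ok, err := nodeIsReady(c, "https://192.169.0.5:8443", "ha-485000-m02"); err == nil && ok {
			fmt.Println("node ha-485000-m02 is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for node Ready")
	os.Exit(1)
}
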
	I0725 10:49:12.588113    3774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 10:49:12.588160    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:12.588167    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.588173    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.588177    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.640690    3774 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0725 10:49:12.646847    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.646903    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:49:12.646909    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.646915    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.646917    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.650764    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:12.651266    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:12.651274    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.651280    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.651283    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.655046    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:12.655360    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.655369    3774 pod_ready.go:81] duration metric: took 8.506318ms for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.655377    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.655414    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pnmm6
	I0725 10:49:12.655418    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.655424    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.655428    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.657266    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.657799    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:12.657806    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.657811    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.657815    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.659256    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.659713    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.659721    3774 pod_ready.go:81] duration metric: took 4.339404ms for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.659728    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.659764    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000
	I0725 10:49:12.659769    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.659775    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.659779    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.661249    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.661658    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:12.661665    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.661671    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.661674    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.662972    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.663349    3774 pod_ready.go:92] pod "etcd-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.663356    3774 pod_ready.go:81] duration metric: took 3.624252ms for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.663362    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.663396    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m02
	I0725 10:49:12.663402    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.663407    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.663412    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.665923    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:12.666288    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:12.666295    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.666300    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.666303    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.667727    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.668096    3774 pod_ready.go:92] pod "etcd-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.668110    3774 pod_ready.go:81] duration metric: took 4.73801ms for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.668116    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.668146    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:49:12.668151    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.668156    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.668160    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.669612    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:12.789876    3774 request.go:629] Waited for 119.652546ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:12.789922    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:12.789939    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.789951    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.789958    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.792981    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:12.793487    3774 pod_ready.go:92] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:12.793499    3774 pod_ready.go:81] duration metric: took 125.375312ms for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.793518    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:12.989031    3774 request.go:629] Waited for 195.453141ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000
	I0725 10:49:12.989083    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000
	I0725 10:49:12.989166    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:12.989181    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:12.989188    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:12.991803    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.188209    3774 request.go:629] Waited for 195.602163ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:13.188275    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:13.188289    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.188295    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.188299    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.190503    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.190953    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:13.190962    3774 pod_ready.go:81] duration metric: took 397.432093ms for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.190969    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.390284    3774 request.go:629] Waited for 199.267222ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m02
	I0725 10:49:13.390387    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m02
	I0725 10:49:13.390398    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.390409    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.390414    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.393500    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:13.589029    3774 request.go:629] Waited for 194.741072ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:13.589060    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:13.589065    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.589074    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.589108    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.591724    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.592136    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:13.592146    3774 pod_ready.go:81] duration metric: took 401.165409ms for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.592153    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.789047    3774 request.go:629] Waited for 196.700248ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m03
	I0725 10:49:13.789110    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m03
	I0725 10:49:13.789120    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.789143    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.789150    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.792302    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:13.988478    3774 request.go:629] Waited for 195.547657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:13.988571    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:13.988579    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:13.988590    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:13.988601    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:13.991591    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:13.992155    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:13.992168    3774 pod_ready.go:81] duration metric: took 400.004283ms for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:13.992177    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.188856    3774 request.go:629] Waited for 196.606719ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000
	I0725 10:49:14.189024    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000
	I0725 10:49:14.189035    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.189046    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.189056    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.192692    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:14.388336    3774 request.go:629] Waited for 195.165082ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:14.388469    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:14.388481    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.388492    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.388500    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.392008    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:14.392470    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:14.392479    3774 pod_ready.go:81] duration metric: took 400.290042ms for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.392486    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.589221    3774 request.go:629] Waited for 196.680325ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:49:14.589260    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:49:14.589265    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.589271    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.589276    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.591675    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:14.788243    3774 request.go:629] Waited for 196.189639ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:14.788314    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:14.788319    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.788325    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.788338    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.790264    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:14.790682    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:14.790691    3774 pod_ready.go:81] duration metric: took 398.194597ms for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.790698    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:14.989573    3774 request.go:629] Waited for 198.821418ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:49:14.989634    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:49:14.989642    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:14.989650    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:14.989656    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:14.992325    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:15.189580    3774 request.go:629] Waited for 196.795704ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.189705    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.189717    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.189728    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.189736    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.193091    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:15.193587    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:15.193598    3774 pod_ready.go:81] duration metric: took 402.889494ms for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.193607    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.388771    3774 request.go:629] Waited for 195.112594ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:49:15.388925    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:49:15.388937    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.388951    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.388957    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.391994    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:15.589664    3774 request.go:629] Waited for 197.220516ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.589694    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:15.589699    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.589737    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.589742    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.683154    3774 round_trippers.go:574] Response Status: 200 OK in 93 milliseconds
	I0725 10:49:15.683671    3774 pod_ready.go:92] pod "kube-proxy-65n48" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:15.683684    3774 pod_ready.go:81] duration metric: took 490.064641ms for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.683693    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.789750    3774 request.go:629] Waited for 106.017908ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:49:15.789823    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:49:15.789840    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.789847    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.789850    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.791774    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:15.988478    3774 request.go:629] Waited for 196.346928ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:15.988570    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:15.988579    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:15.988587    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:15.988593    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:15.990813    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:15.991161    3774 pod_ready.go:92] pod "kube-proxy-9w8bj" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:15.991171    3774 pod_ready.go:81] duration metric: took 307.468405ms for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:15.991178    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.188563    3774 request.go:629] Waited for 197.337332ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:49:16.188614    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:49:16.188625    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.188638    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.188644    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.191974    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:16.388347    3774 request.go:629] Waited for 195.920607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:16.388377    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:16.388382    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.388388    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.388392    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.390392    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:16.390667    3774 pod_ready.go:92] pod "kube-proxy-dc5jq" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:16.390675    3774 pod_ready.go:81] duration metric: took 399.488144ms for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.390682    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.588555    3774 request.go:629] Waited for 197.808091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:49:16.588661    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:49:16.588673    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.588684    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.588693    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.591719    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:16.789759    3774 request.go:629] Waited for 197.31897ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:49:16.789876    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:49:16.789887    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.789898    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.789919    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.793070    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:16.793486    3774 pod_ready.go:92] pod "kube-proxy-mvbkh" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:16.793498    3774 pod_ready.go:81] duration metric: took 402.805905ms for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.793509    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:16.989321    3774 request.go:629] Waited for 195.762555ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:49:16.989391    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:49:16.989399    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:16.989406    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:16.989412    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:16.991782    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:17.188598    3774 request.go:629] Waited for 196.377768ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:17.188628    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:49:17.188633    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.188682    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.188688    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.190695    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:49:17.190988    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:17.190998    3774 pod_ready.go:81] duration metric: took 397.476264ms for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.191012    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.388526    3774 request.go:629] Waited for 197.466794ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:49:17.388597    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:49:17.388604    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.388613    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.388618    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.391737    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:17.588483    3774 request.go:629] Waited for 195.44881ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:17.588569    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:49:17.588577    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.588586    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.588593    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.591164    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:17.591519    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:17.591528    3774 pod_ready.go:81] duration metric: took 400.505326ms for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.591535    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.789192    3774 request.go:629] Waited for 197.613853ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:49:17.789246    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:49:17.789256    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.789265    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.789271    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.792807    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:17.989780    3774 request.go:629] Waited for 196.433914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:17.989847    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:49:17.989853    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:17.989859    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:17.989862    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:17.991949    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:17.992236    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:49:17.992245    3774 pod_ready.go:81] duration metric: took 400.700976ms for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:49:17.992253    3774 pod_ready.go:38] duration metric: took 5.404058179s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 10:49:17.992267    3774 api_server.go:52] waiting for apiserver process to appear ...
	I0725 10:49:17.992318    3774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 10:49:18.004356    3774 api_server.go:72] duration metric: took 15.250996818s to wait for apiserver process to appear ...
	I0725 10:49:18.004369    3774 api_server.go:88] waiting for apiserver healthz status ...
	I0725 10:49:18.004385    3774 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0725 10:49:18.008895    3774 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0725 10:49:18.008938    3774 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0725 10:49:18.008944    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.008958    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.008961    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.009486    3774 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0725 10:49:18.009545    3774 api_server.go:141] control plane version: v1.30.3
	I0725 10:49:18.009554    3774 api_server.go:131] duration metric: took 5.181232ms to wait for apiserver health ...
	I0725 10:49:18.009561    3774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 10:49:18.188337    3774 request.go:629] Waited for 178.740534ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.188382    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.188390    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.188435    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.188439    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.193593    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:49:18.200076    3774 system_pods.go:59] 26 kube-system pods found
	I0725 10:49:18.200099    3774 system_pods.go:61] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:49:18.200103    3774 system_pods.go:61] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:49:18.200106    3774 system_pods.go:61] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:49:18.200112    3774 system_pods.go:61] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 10:49:18.200115    3774 system_pods.go:61] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:49:18.200119    3774 system_pods.go:61] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:49:18.200121    3774 system_pods.go:61] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:49:18.200123    3774 system_pods.go:61] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:49:18.200127    3774 system_pods.go:61] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0725 10:49:18.200131    3774 system_pods.go:61] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:49:18.200135    3774 system_pods.go:61] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 10:49:18.200139    3774 system_pods.go:61] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:49:18.200142    3774 system_pods.go:61] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:49:18.200147    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 10:49:18.200151    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:49:18.200154    3774 system_pods.go:61] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:49:18.200156    3774 system_pods.go:61] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:49:18.200160    3774 system_pods.go:61] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 10:49:18.200163    3774 system_pods.go:61] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:49:18.200166    3774 system_pods.go:61] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:49:18.200170    3774 system_pods.go:61] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 10:49:18.200173    3774 system_pods.go:61] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:49:18.200178    3774 system_pods.go:61] "kube-vip-ha-485000" [9a116dbb-295f-4e79-8ab5-61e81ab7ddf4] Running
	I0725 10:49:18.200181    3774 system_pods.go:61] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:49:18.200184    3774 system_pods.go:61] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:49:18.200186    3774 system_pods.go:61] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running
	I0725 10:49:18.200193    3774 system_pods.go:74] duration metric: took 190.622361ms to wait for pod list to return data ...
	I0725 10:49:18.200199    3774 default_sa.go:34] waiting for default service account to be created ...
	I0725 10:49:18.388524    3774 request.go:629] Waited for 188.275556ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:49:18.388557    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:49:18.388562    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.388570    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.388573    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.390924    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:49:18.391069    3774 default_sa.go:45] found service account: "default"
	I0725 10:49:18.391078    3774 default_sa.go:55] duration metric: took 190.872598ms for default service account to be created ...
	I0725 10:49:18.391084    3774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 10:49:18.588425    3774 request.go:629] Waited for 197.306337ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.588458    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:49:18.588463    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.588469    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.588474    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.593698    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:49:18.599145    3774 system_pods.go:86] 26 kube-system pods found
	I0725 10:49:18.599162    3774 system_pods.go:89] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:49:18.599167    3774 system_pods.go:89] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:49:18.599170    3774 system_pods.go:89] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:49:18.599175    3774 system_pods.go:89] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 10:49:18.599194    3774 system_pods.go:89] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:49:18.599199    3774 system_pods.go:89] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:49:18.599204    3774 system_pods.go:89] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:49:18.599207    3774 system_pods.go:89] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:49:18.599213    3774 system_pods.go:89] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0725 10:49:18.599216    3774 system_pods.go:89] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:49:18.599223    3774 system_pods.go:89] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 10:49:18.599227    3774 system_pods.go:89] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:49:18.599232    3774 system_pods.go:89] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:49:18.599237    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 10:49:18.599241    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:49:18.599245    3774 system_pods.go:89] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:49:18.599249    3774 system_pods.go:89] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:49:18.599253    3774 system_pods.go:89] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 10:49:18.599272    3774 system_pods.go:89] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:49:18.599280    3774 system_pods.go:89] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:49:18.599286    3774 system_pods.go:89] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 10:49:18.599291    3774 system_pods.go:89] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:49:18.599295    3774 system_pods.go:89] "kube-vip-ha-485000" [9a116dbb-295f-4e79-8ab5-61e81ab7ddf4] Running
	I0725 10:49:18.599298    3774 system_pods.go:89] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:49:18.599301    3774 system_pods.go:89] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:49:18.599304    3774 system_pods.go:89] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running
	I0725 10:49:18.599309    3774 system_pods.go:126] duration metric: took 208.21895ms to wait for k8s-apps to be running ...
	I0725 10:49:18.599322    3774 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 10:49:18.599377    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:49:18.611737    3774 system_svc.go:56] duration metric: took 12.412676ms WaitForService to wait for kubelet
	I0725 10:49:18.611752    3774 kubeadm.go:582] duration metric: took 15.858385641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 10:49:18.611763    3774 node_conditions.go:102] verifying NodePressure condition ...
	I0725 10:49:18.789774    3774 request.go:629] Waited for 177.962916ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0725 10:49:18.789887    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0725 10:49:18.789910    3774 round_trippers.go:469] Request Headers:
	I0725 10:49:18.789924    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:49:18.789930    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:49:18.793551    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:49:18.794424    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794438    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794448    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794452    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794455    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794458    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794462    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:49:18.794465    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:49:18.794468    3774 node_conditions.go:105] duration metric: took 182.699541ms to run NodePressure ...
	I0725 10:49:18.794476    3774 start.go:241] waiting for startup goroutines ...
	I0725 10:49:18.794495    3774 start.go:255] writing updated cluster config ...
	I0725 10:49:18.818273    3774 out.go:177] 
	I0725 10:49:18.857228    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:18.857312    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:49:18.879124    3774 out.go:177] * Starting "ha-485000-m03" control-plane node in "ha-485000" cluster
	I0725 10:49:18.920865    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:49:18.920897    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:49:18.921101    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:49:18.921120    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:49:18.921243    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:49:18.922716    3774 start.go:360] acquireMachinesLock for ha-485000-m03: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:49:18.922851    3774 start.go:364] duration metric: took 109.932µs to acquireMachinesLock for "ha-485000-m03"
	I0725 10:49:18.922881    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:49:18.922891    3774 fix.go:54] fixHost starting: m03
	I0725 10:49:18.923315    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:49:18.923351    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:49:18.932987    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51938
	I0725 10:49:18.933376    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:49:18.933781    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:49:18.933802    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:49:18.934032    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:49:18.934154    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:18.934244    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetState
	I0725 10:49:18.934342    3774 main.go:141] libmachine: (ha-485000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:18.934436    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid from json: 3293
	I0725 10:49:18.935384    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid 3293 missing from process table
	I0725 10:49:18.935414    3774 fix.go:112] recreateIfNeeded on ha-485000-m03: state=Stopped err=<nil>
	I0725 10:49:18.935426    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	W0725 10:49:18.935534    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:49:18.973151    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000-m03" ...
	I0725 10:49:19.030981    3774 main.go:141] libmachine: (ha-485000-m03) Calling .Start
	I0725 10:49:19.031240    3774 main.go:141] libmachine: (ha-485000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:19.031347    3774 main.go:141] libmachine: (ha-485000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid
	I0725 10:49:19.033047    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid 3293 missing from process table
	I0725 10:49:19.033059    3774 main.go:141] libmachine: (ha-485000-m03) DBG | pid 3293 is in state "Stopped"
	I0725 10:49:19.033079    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid...
	I0725 10:49:19.033332    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Using UUID 8bec60ab-aefc-4069-8cc1-870073932ec4
	I0725 10:49:19.063128    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Generated MAC f2:df:a:a6:c4:51
	I0725 10:49:19.063160    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:49:19.063291    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8bec60ab-aefc-4069-8cc1-870073932ec4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ba960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:49:19.063329    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8bec60ab-aefc-4069-8cc1-870073932ec4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ba960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:49:19.063383    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8bec60ab-aefc-4069-8cc1-870073932ec4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/ha-485000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:49:19.063428    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8bec60ab-aefc-4069-8cc1-870073932ec4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/ha-485000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:49:19.063456    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:49:19.065029    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 DEBUG: hyperkit: Pid is 3803
	I0725 10:49:19.065402    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Attempt 0
	I0725 10:49:19.065431    3774 main.go:141] libmachine: (ha-485000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:49:19.065485    3774 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid from json: 3803
	I0725 10:49:19.067650    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Searching for f2:df:a:a6:c4:51 in /var/db/dhcpd_leases ...
	I0725 10:49:19.067771    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:49:19.067897    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e185}
	I0725 10:49:19.067935    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetConfigRaw
	I0725 10:49:19.067949    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e173}
	I0725 10:49:19.067969    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:49:19.067982    3774 main.go:141] libmachine: (ha-485000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a3e09c}
	I0725 10:49:19.068002    3774 main.go:141] libmachine: (ha-485000-m03) DBG | Found match: f2:df:a:a6:c4:51
	I0725 10:49:19.068014    3774 main.go:141] libmachine: (ha-485000-m03) DBG | IP: 192.169.0.7
	I0725 10:49:19.068702    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:49:19.069020    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:49:19.069625    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:49:19.069642    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:19.069816    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:19.069967    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:19.070130    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:19.070266    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:19.070381    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:19.070553    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:19.070752    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:19.070765    3774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 10:49:19.074618    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:49:19.083650    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:49:19.084521    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:49:19.084551    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:49:19.084569    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:49:19.084582    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:49:19.479143    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:49:19.479156    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:49:19.594100    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:49:19.594116    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:49:19.594124    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:49:19.594130    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:49:19.595151    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:49:19.595161    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:49:25.234237    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:49:25.234303    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:49:25.234325    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:49:25.257972    3774 main.go:141] libmachine: (ha-485000-m03) DBG | 2024/07/25 10:49:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:49:54.133594    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:49:54.133609    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetMachineName
	I0725 10:49:54.133749    3774 buildroot.go:166] provisioning hostname "ha-485000-m03"
	I0725 10:49:54.133758    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetMachineName
	I0725 10:49:54.133861    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.133950    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.134036    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.134136    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.134221    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.134384    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.134539    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.134553    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000-m03 && echo "ha-485000-m03" | sudo tee /etc/hostname
	I0725 10:49:54.200648    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000-m03
	
	I0725 10:49:54.200663    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.200790    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.200879    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.200961    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.201044    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.201180    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.201420    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.201433    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:49:54.263039    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 10:49:54.263056    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:49:54.263070    3774 buildroot.go:174] setting up certificates
	I0725 10:49:54.263076    3774 provision.go:84] configureAuth start
	I0725 10:49:54.263083    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetMachineName
	I0725 10:49:54.263216    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:49:54.263306    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.263391    3774 provision.go:143] copyHostCerts
	I0725 10:49:54.263427    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:49:54.263491    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:49:54.263497    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:49:54.263638    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:49:54.263837    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:49:54.263878    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:49:54.263883    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:49:54.263964    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:49:54.264113    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:49:54.264157    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:49:54.264162    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:49:54.264238    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:49:54.264391    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000-m03 san=[127.0.0.1 192.169.0.7 ha-485000-m03 localhost minikube]
	I0725 10:49:54.466588    3774 provision.go:177] copyRemoteCerts
	I0725 10:49:54.466634    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:49:54.466649    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.466797    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.466896    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.466976    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.467051    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:54.501149    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:49:54.501228    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:49:54.520581    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:49:54.520648    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 10:49:54.540217    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:49:54.540294    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 10:49:54.559440    3774 provision.go:87] duration metric: took 296.351002ms to configureAuth
	I0725 10:49:54.559454    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:49:54.559628    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:49:54.559646    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:54.559774    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.559865    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.559954    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.560037    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.560104    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.560211    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.560343    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.560351    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:49:54.614691    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:49:54.614704    3774 buildroot.go:70] root file system type: tmpfs
	I0725 10:49:54.614776    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:49:54.614790    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.614925    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.615019    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.615123    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.615214    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.615348    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.615499    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.615544    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:49:54.680274    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 10:49:54.680293    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:54.680409    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:54.680496    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.680575    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:54.680666    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:54.680823    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:54.680965    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:54.680985    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:49:56.345971    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 10:49:56.345985    3774 machine.go:97] duration metric: took 37.275852067s to provisionDockerMachine
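	The long SSH command a few lines above follows a write-then-swap pattern: the rendered unit is written to docker.service.new, compared against the installed unit with diff, and only when they differ is it moved into place and followed by daemon-reload, enable and restart. Here diff fails because no docker.service existed yet, so the new unit is installed and the enable step creates the symlink shown. The double ExecStart= in the unit is deliberate, as its own comments explain: the empty line clears any inherited command before the dockerd invocation is set. After the swap, the effective command line and proxy environment can be read back on the guest (a verification sketch, not part of the recorded run):
	  sudo systemctl cat docker.service | grep -E '^(ExecStart|Environment)='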
	I0725 10:49:56.345993    3774 start.go:293] postStartSetup for "ha-485000-m03" (driver="hyperkit")
	I0725 10:49:56.346001    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:49:56.346022    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.346236    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:49:56.346252    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.346356    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.346471    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.346553    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.346642    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:56.380977    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:49:56.384488    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:49:56.384501    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:49:56.384614    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:49:56.384806    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:49:56.384814    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:49:56.385027    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:49:56.392745    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:49:56.412613    3774 start.go:296] duration metric: took 66.60995ms for postStartSetup
	I0725 10:49:56.412634    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.412808    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:49:56.412819    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.412903    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.412988    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.413073    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.413150    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:56.446365    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:49:56.446421    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:49:56.498637    3774 fix.go:56] duration metric: took 37.575244811s for fixHost
	I0725 10:49:56.498667    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.498812    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.498919    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.499016    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.499104    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.499238    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:49:56.499386    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0725 10:49:56.499396    3774 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 10:49:56.555439    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929796.646350339
	
	I0725 10:49:56.555451    3774 fix.go:216] guest clock: 1721929796.646350339
	I0725 10:49:56.555456    3774 fix.go:229] Guest: 2024-07-25 10:49:56.646350339 -0700 PDT Remote: 2024-07-25 10:49:56.498656 -0700 PDT m=+90.525587748 (delta=147.694339ms)
	I0725 10:49:56.555467    3774 fix.go:200] guest clock delta is within tolerance: 147.694339ms
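	The tolerance check above is just the difference of the two timestamps, guest minus host, recomputed here from the values in the log (a quick arithmetic check, nothing more):
	  awk 'BEGIN { printf "%.6f s\n", 1721929796.646350339 - 1721929796.498656 }'
	  # prints roughly 0.147694 s, i.e. the 147.694339ms delta judged within tolerance above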
	I0725 10:49:56.555470    3774 start.go:83] releasing machines lock for "ha-485000-m03", held for 37.632106624s
	I0725 10:49:56.555487    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.555618    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:49:56.578528    3774 out.go:177] * Found network options:
	I0725 10:49:56.599071    3774 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0725 10:49:56.620070    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:49:56.620097    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:49:56.620115    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.620743    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.620967    3774 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:49:56.621083    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:49:56.621119    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	W0725 10:49:56.621187    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:49:56.621219    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:49:56.621317    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 10:49:56.621327    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.621338    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:49:56.621512    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.621541    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:49:56.621703    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.621726    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:49:56.621905    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:49:56.621918    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:49:56.622026    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	W0725 10:49:56.652986    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:49:56.653049    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:49:56.706201    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 10:49:56.706215    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:49:56.706281    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:49:56.721415    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:49:56.730471    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:49:56.739423    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:49:56.739473    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:49:56.748522    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:49:56.757654    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:49:56.766745    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:49:56.775683    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:49:56.785223    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:49:56.794261    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:49:56.803167    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:49:56.812374    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:49:56.820684    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:49:56.828942    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:56.928553    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
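	The sed commands above rewrite /etc/containerd/config.toml in place: SystemdCgroup is forced to false (matching the cgroupfs driver this run settles on), the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to io.containerd.runc.v2, the CNI conf_dir is pointed at /etc/cni/net.d, and the sandbox image is pinned to registry.k8s.io/pause:3.9. A quick spot-check of the result on the guest (a sketch):
	  grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml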
	I0725 10:49:56.947129    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:49:56.947197    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:49:56.959097    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:49:56.970776    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:49:56.984568    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:49:56.994746    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:49:57.005192    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:49:57.026508    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:49:57.037792    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:49:57.052754    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:49:57.055697    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:49:57.062771    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:49:57.076283    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:49:57.171356    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:49:57.271094    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:49:57.271118    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 10:49:57.288365    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:49:57.388283    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:49:59.699253    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.310915582s)
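	With /etc/docker/daemon.json written for the cgroupfs driver and the daemon restarted, the effective setting can be confirmed from inside the guest (a sketch; the expected output here is cgroupfs):
	  docker info --format '{{.CgroupDriver}}'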
	I0725 10:49:59.699313    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 10:49:59.710426    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:49:59.721257    3774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 10:49:59.814380    3774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 10:49:59.915914    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:00.023205    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 10:50:00.037086    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 10:50:00.048065    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:00.150126    3774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 10:50:00.211936    3774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 10:50:00.212049    3774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 10:50:00.216805    3774 start.go:563] Will wait 60s for crictl version
	I0725 10:50:00.216881    3774 ssh_runner.go:195] Run: which crictl
	I0725 10:50:00.220095    3774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 10:50:00.244551    3774 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0725 10:50:00.244627    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:50:00.264059    3774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 10:50:00.304819    3774 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0725 10:50:00.346760    3774 out.go:177]   - env NO_PROXY=192.169.0.5
	I0725 10:50:00.368834    3774 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0725 10:50:00.394661    3774 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:50:00.395004    3774 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0725 10:50:00.399400    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:50:00.409863    3774 mustload.go:65] Loading cluster: ha-485000
	I0725 10:50:00.410047    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:00.410280    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:50:00.410302    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:50:00.419259    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51960
	I0725 10:50:00.419623    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:50:00.419945    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:50:00.419955    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:50:00.420150    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:50:00.420256    3774 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:50:00.420338    3774 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:00.420423    3774 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3787
	I0725 10:50:00.421353    3774 host.go:66] Checking if "ha-485000" exists ...
	I0725 10:50:00.421593    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:50:00.421616    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:50:00.430525    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51962
	I0725 10:50:00.430861    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:50:00.431198    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:50:00.431211    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:50:00.431432    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:50:00.431553    3774 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:50:00.431647    3774 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000 for IP: 192.169.0.7
	I0725 10:50:00.431652    3774 certs.go:194] generating shared ca certs ...
	I0725 10:50:00.431664    3774 certs.go:226] acquiring lock for ca certs: {Name:mk9c2ae9585d6ecb7080ac84ff2a6c71f713722e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:50:00.431829    3774 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key
	I0725 10:50:00.431909    3774 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key
	I0725 10:50:00.431917    3774 certs.go:256] generating profile certs ...
	I0725 10:50:00.432022    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key
	I0725 10:50:00.432138    3774 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key.cfc1c64d
	I0725 10:50:00.432211    3774 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key
	I0725 10:50:00.432218    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 10:50:00.432239    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 10:50:00.432260    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 10:50:00.432278    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 10:50:00.432295    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 10:50:00.432331    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 10:50:00.432368    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 10:50:00.432392    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 10:50:00.432488    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem (1338 bytes)
	W0725 10:50:00.432538    3774 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732_empty.pem, impossibly tiny 0 bytes
	I0725 10:50:00.432546    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 10:50:00.432580    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem (1078 bytes)
	I0725 10:50:00.432612    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem (1123 bytes)
	I0725 10:50:00.432641    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem (1679 bytes)
	I0725 10:50:00.432707    3774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:50:00.432744    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.432766    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem -> /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.432786    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.432812    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:50:00.432904    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:50:00.432984    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:50:00.433067    3774 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:50:00.433144    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:50:00.457759    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0725 10:50:00.461662    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0725 10:50:00.470495    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0725 10:50:00.474031    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0725 10:50:00.483513    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0725 10:50:00.486473    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0725 10:50:00.495156    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0725 10:50:00.498306    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0725 10:50:00.507713    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0725 10:50:00.510936    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0725 10:50:00.519399    3774 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0725 10:50:00.523489    3774 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0725 10:50:00.532193    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 10:50:00.552954    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 10:50:00.573061    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 10:50:00.593555    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 10:50:00.613482    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 10:50:00.633390    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 10:50:00.653522    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 10:50:00.673721    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 10:50:00.693637    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 10:50:00.713744    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/1732.pem --> /usr/share/ca-certificates/1732.pem (1338 bytes)
	I0725 10:50:00.733957    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /usr/share/ca-certificates/17322.pem (1708 bytes)
	I0725 10:50:00.753667    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0725 10:50:00.767462    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0725 10:50:00.781289    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0725 10:50:00.795165    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0725 10:50:00.808987    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0725 10:50:00.823098    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0725 10:50:00.836829    3774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0725 10:50:00.850678    3774 ssh_runner.go:195] Run: openssl version
	I0725 10:50:00.854970    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17322.pem && ln -fs /usr/share/ca-certificates/17322.pem /etc/ssl/certs/17322.pem"
	I0725 10:50:00.863536    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.867064    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:38 /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.867101    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17322.pem
	I0725 10:50:00.871309    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17322.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 10:50:00.879945    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 10:50:00.888535    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.892087    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.892134    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 10:50:00.896459    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 10:50:00.905152    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1732.pem && ln -fs /usr/share/ca-certificates/1732.pem /etc/ssl/certs/1732.pem"
	I0725 10:50:00.913558    3774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.917014    3774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:38 /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.917052    3774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1732.pem
	I0725 10:50:00.921381    3774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1732.pem /etc/ssl/certs/51391683.0"
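	The eight-hex-digit names used for the symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes of the corresponding certificates, which is exactly what the x509 -hash invocations in the log compute. Reproducing one of them by hand (a sketch):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above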
	I0725 10:50:00.929938    3774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 10:50:00.933439    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 10:50:00.937823    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 10:50:00.942083    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 10:50:00.946333    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 10:50:00.950751    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 10:50:00.954972    3774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
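	Each -checkend 86400 probe above asks whether the certificate expires within the next 86400 seconds (24 hours); openssl exits 0 if it stays valid and non-zero if not, so a clean run here means the existing certificates can be reused. The same probe for a single file, spelled out (a sketch):
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo "still valid for at least 24h" || echo "expires within 24h"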
	I0725 10:50:00.959334    3774 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.30.3 docker true true} ...
	I0725 10:50:00.959393    3774 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-485000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-485000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
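	The kubelet drop-in above clears the inherited ExecStart and relaunches the kubelet with the node-specific flags for this third control-plane member (hostname-override=ha-485000-m03, node-ip=192.169.0.7); it is copied a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once installed it can be inspected on the guest with (a sketch):
	  sudo systemctl cat kubelet | grep -E 'node-ip|hostname-override'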
	I0725 10:50:00.959413    3774 kube-vip.go:115] generating kube-vip config ...
	I0725 10:50:00.959451    3774 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 10:50:00.974500    3774 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 10:50:00.974542    3774 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
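	The manifest above is a static pod: kube-vip runs on the host network with NET_ADMIN and NET_RAW, takes the plndr-cp-lock leader-election lease, and advertises the control-plane VIP 192.169.0.254 on port 8443, with load-balancing enabled automatically as noted. It is written below to /etc/kubernetes/manifests/kube-vip.yaml, and a client-side dry run is one way to confirm it parses before the kubelet picks it up (a sketch, assuming kubectl is available on the guest):
	  kubectl apply --dry-run=client -f /etc/kubernetes/manifests/kube-vip.yaml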
	I0725 10:50:00.974599    3774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 10:50:00.983391    3774 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 10:50:00.983448    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0725 10:50:00.992451    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0725 10:50:01.006435    3774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 10:50:01.020173    3774 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0725 10:50:01.034556    3774 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0725 10:50:01.037605    3774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 10:50:01.047456    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:01.141578    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:50:01.156504    3774 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 10:50:01.156711    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:01.178105    3774 out.go:177] * Verifying Kubernetes components...
	I0725 10:50:01.219418    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:01.336829    3774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 10:50:01.353501    3774 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:50:01.353708    3774 kapi.go:59] client config for ha-485000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1195/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe203400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0725 10:50:01.353744    3774 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0725 10:50:01.353906    3774 node_ready.go:35] waiting up to 6m0s for node "ha-485000-m03" to be "Ready" ...
	I0725 10:50:01.353944    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:01.353949    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.353955    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.353958    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.356860    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:01.357167    3774 node_ready.go:49] node "ha-485000-m03" has status "Ready":"True"
	I0725 10:50:01.357178    3774 node_ready.go:38] duration metric: took 3.262682ms for node "ha-485000-m03" to be "Ready" ...
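	The readiness poll above is a plain GET of the node object through client-go, sent to 192.169.0.5:8443 after the stale VIP endpoint was overridden. The same condition can be read back with kubectl using the kubeconfig from this run (a sketch; node name and path are taken from the log):
	  KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig \
	    kubectl get node ha-485000-m03 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  # prints True, matching the "Ready":"True" status logged above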
	I0725 10:50:01.357193    3774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 10:50:01.357239    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:01.357246    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.357252    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.357257    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.362406    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:50:01.367672    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:01.367737    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:01.367747    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.367754    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.367758    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.370874    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:01.372501    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:01.372645    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.372667    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.372873    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.375539    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:01.868232    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:01.868248    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.868255    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.868258    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.870654    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:01.871201    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:01.871209    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:01.871215    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:01.871218    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:01.873125    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:02.368740    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:02.368762    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.368777    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.368783    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.377869    3774 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0725 10:50:02.379271    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:02.379283    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.379290    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.379293    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.385746    3774 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0725 10:50:02.869348    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:02.869364    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.869371    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.869375    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.871499    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:02.872136    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:02.872144    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:02.872150    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:02.872155    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:02.874538    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:03.367877    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:03.367889    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.367896    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.367899    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.371090    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:03.371735    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:03.371743    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.371750    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.371753    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.374291    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:03.374875    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:03.869639    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:03.869654    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.869661    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.869665    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.871755    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:03.872260    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:03.872269    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:03.872275    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:03.872280    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:03.874379    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:04.369638    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:04.369657    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.369703    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.369708    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.372772    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:04.373269    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:04.373277    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.373282    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.373299    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.375712    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:04.869487    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:04.869502    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.869508    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.869512    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.871992    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:04.872445    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:04.872453    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:04.872459    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:04.872463    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:04.874498    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:05.368695    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:05.368711    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.368717    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.368721    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.370818    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:05.371324    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:05.371336    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.371342    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.371346    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.373135    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:05.869401    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:05.869464    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.869473    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.869478    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.871690    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:05.872113    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:05.872121    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:05.872153    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:05.872158    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:05.874010    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:05.874326    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:06.369093    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:06.369156    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.369165    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.369170    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.372202    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:06.372616    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:06.372624    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.372630    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.372639    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.374312    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:06.869508    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:06.869524    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.869531    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.869536    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.872066    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:06.872572    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:06.872580    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:06.872586    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:06.872589    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:06.874648    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:07.368133    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:07.368154    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.368178    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.368182    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.370664    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:07.371146    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:07.371153    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.371158    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.371165    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.372921    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:07.868556    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:07.868570    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.868576    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.868580    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.870520    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:07.871027    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:07.871035    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:07.871041    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:07.871044    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:07.872683    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:08.368226    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:08.368242    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.368249    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.368253    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.370410    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:08.370844    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:08.370852    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.370858    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.370862    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.372615    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:08.373006    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:08.868118    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:08.868176    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.868187    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.868194    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.870430    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:08.870901    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:08.870909    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:08.870915    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:08.870919    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:08.872656    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:09.368941    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:09.368959    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.368966    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.368970    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.371000    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:09.371430    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:09.371438    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.371444    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.371447    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.372991    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:09.868002    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:09.868047    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.868058    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.868062    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.870388    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:09.870824    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:09.870832    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:09.870838    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:09.870842    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:09.872677    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:10.369049    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:10.369064    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.369071    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.369074    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.371043    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:10.371476    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:10.371483    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.371489    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.371492    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.372986    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:10.373378    3774 pod_ready.go:102] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:10.869188    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:10.869207    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.869244    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.869251    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.871587    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:10.872055    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:10.872063    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:10.872068    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:10.872071    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:10.873650    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.368644    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dv8wr
	I0725 10:50:11.368660    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.368670    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.368674    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.370971    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.371396    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:11.371404    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.371410    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.371414    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.373238    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.373609    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.373619    3774 pod_ready.go:81] duration metric: took 10.005797843s for pod "coredns-7db6d8ff4d-dv8wr" in "kube-system" namespace to be "Ready" ...
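	[editor's note] The repeated GET .../pods/coredns-7db6d8ff4d-dv8wr and GET .../nodes/ha-485000 pairs above are the readiness poll that just completed after roughly 10 seconds. Below is a minimal client-go sketch of that pattern; it checks only the pod's Ready condition (the test additionally re-fetches the hosting node on each iteration). The kubeconfig path and pod name are taken from the log, while the loop itself is illustrative and not minikube's pod_ready implementation.

	// Sketch only: poll a kube-system pod until its Ready condition is True,
	// roughly matching the ~500ms cadence visible in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19326-1195/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// The test waits up to 6m0s for each system-critical pod.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-dv8wr", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for pod to be Ready")
			case <-time.After(500 * time.Millisecond):
			}
		}
	}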
	I0725 10:50:11.373657    3774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.373710    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-pnmm6
	I0725 10:50:11.373716    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.373722    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.373728    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.375656    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.376026    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:11.376035    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.376041    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.376044    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.378143    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.378749    3774 pod_ready.go:92] pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.378758    3774 pod_ready.go:81] duration metric: took 5.088497ms for pod "coredns-7db6d8ff4d-pnmm6" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.378765    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.378806    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000
	I0725 10:50:11.378810    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.378816    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.378820    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.380851    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.381363    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:11.381371    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.381377    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.381381    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.383335    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.383692    3774 pod_ready.go:92] pod "etcd-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.383702    3774 pod_ready.go:81] duration metric: took 4.931732ms for pod "etcd-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.383708    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.383749    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m02
	I0725 10:50:11.383754    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.383760    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.383764    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.385637    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.386081    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:11.386088    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.386094    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.386097    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.388000    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.388355    3774 pod_ready.go:92] pod "etcd-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:11.388366    3774 pod_ready.go:81] duration metric: took 4.652083ms for pod "etcd-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.388374    3774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:11.388416    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:11.388421    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.388427    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.388431    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.390189    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:11.390609    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:11.390617    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.390622    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.390633    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.392651    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.889083    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:11.889128    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.889138    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.889143    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.891738    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:11.892209    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:11.892216    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:11.892221    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:11.892223    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:11.893988    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:12.390496    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:12.390512    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.390518    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.390521    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.392470    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:12.392850    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:12.392858    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.392864    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.392869    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.394421    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:12.889565    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:12.889659    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.889673    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.889678    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.892819    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:12.893389    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:12.893400    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:12.893409    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:12.893414    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:12.895094    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:13.389794    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:13.389809    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.389816    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.389820    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.391840    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:13.392224    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:13.392231    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.392237    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.392241    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.393832    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:13.394204    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:13.889796    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:13.889812    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.889818    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.889823    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.892206    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:13.892725    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:13.892732    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:13.892737    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:13.892747    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:13.894556    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:14.390135    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:14.390150    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.390156    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.390160    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.392174    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:14.392583    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:14.392590    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.392596    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.392612    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.394268    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:14.889560    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:14.889573    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.889579    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.889582    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.891400    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:14.891841    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:14.891848    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:14.891854    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:14.891860    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:14.893620    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:15.388714    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:15.388793    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.388820    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.388827    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.391756    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:15.392117    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:15.392125    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.392134    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.392139    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.393815    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:15.890117    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:15.890137    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.890149    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.890157    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.893399    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:15.894228    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:15.894235    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:15.894241    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:15.894245    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:15.895912    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:15.896234    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:16.389894    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:16.389912    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.389920    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.389924    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.391955    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:16.392480    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:16.392488    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.392493    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.392505    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.394310    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:16.889781    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:16.889806    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.889818    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.889826    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.893556    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:16.894315    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:16.894326    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:16.894333    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:16.894337    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:16.896126    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:17.388575    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:17.388587    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.388602    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.388605    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.390595    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:17.391157    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:17.391165    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.391170    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.391173    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.392829    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:17.889351    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:17.889367    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.889373    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.889376    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.891538    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:17.891973    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:17.891981    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:17.891987    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:17.891990    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:17.893775    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:18.389267    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:18.389290    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.389304    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.389312    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.392450    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:18.392859    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:18.392867    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.392872    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.392875    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.394522    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:18.394948    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:18.890099    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:18.890111    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.890118    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.890121    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.892565    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:18.892967    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:18.892975    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:18.892981    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:18.892985    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:18.894937    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:19.389827    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:19.389924    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.389936    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.389942    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.392795    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:19.393525    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:19.393536    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.393544    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.393553    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.395412    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:19.889931    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:19.889949    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.889958    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.889962    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.892495    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:19.893008    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:19.893015    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:19.893021    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:19.893024    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:19.894590    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.390037    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:20.390057    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.390068    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.390074    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.393277    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:20.393997    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:20.394008    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.394016    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.394021    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.395790    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.396084    3774 pod_ready.go:102] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"False"
	I0725 10:50:20.889112    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-485000-m03
	I0725 10:50:20.889127    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.889135    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.889139    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.891727    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:20.892142    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:20.892149    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.892155    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.892158    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.893897    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.894418    3774 pod_ready.go:92] pod "etcd-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.894427    3774 pod_ready.go:81] duration metric: took 9.505922344s for pod "etcd-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.894440    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.894470    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000
	I0725 10:50:20.894475    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.894481    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.894485    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.896512    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:20.897090    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:20.897097    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.897103    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.897106    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.898896    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.899262    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.899272    3774 pod_ready.go:81] duration metric: took 4.826573ms for pod "kube-apiserver-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.899278    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.899309    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m02
	I0725 10:50:20.899314    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.899319    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.899324    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.901127    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.901676    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:20.901683    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.901689    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.901692    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.903167    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.903492    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.903501    3774 pod_ready.go:81] duration metric: took 4.217035ms for pod "kube-apiserver-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.903507    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.903535    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-485000-m03
	I0725 10:50:20.903539    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.903548    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.903554    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.905060    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.905423    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:20.905430    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.905435    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.905438    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.906893    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.907231    3774 pod_ready.go:92] pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.907242    3774 pod_ready.go:81] duration metric: took 3.730011ms for pod "kube-apiserver-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.907249    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.907283    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000
	I0725 10:50:20.907288    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.907293    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.907298    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.908832    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.909193    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:20.909200    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:20.909206    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:20.909211    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:20.910822    3774 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 10:50:20.911148    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:20.911157    3774 pod_ready.go:81] duration metric: took 3.903336ms for pod "kube-controller-manager-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:20.911164    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.090249    3774 request.go:629] Waited for 179.043752ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:50:21.090326    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m02
	I0725 10:50:21.090337    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.090348    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.090357    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.093923    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:21.289919    3774 request.go:629] Waited for 195.572332ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:21.289952    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:21.289956    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.289963    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.289968    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.292331    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:21.292914    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:21.292923    3774 pod_ready.go:81] duration metric: took 381.74891ms for pod "kube-controller-manager-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
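	[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's client-side rate limiter: with QPS and Burst left at 0 in the rest.Config dumped earlier, client-go falls back to its modest defaults, so bursts of back-to-back pod/node GETs get queued briefly. As a hedged illustration only (the values below are arbitrary and this is not what the test does), raising QPS and Burst on the config is how a caller would loosen that limiter:

	// Sketch only: widen client-go's client-side token-bucket rate limiter.
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19326-1195/kubeconfig")
		if err != nil {
			panic(err)
		}

		// Leaving QPS/Burst at 0 means client-go uses its low defaults, which
		// is what produces the "client-side throttling" waits in the log when
		// many requests are issued in quick succession. Illustrative values:
		cfg.QPS = 50
		cfg.Burst = 100

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("clientset with QPS=%v Burst=%v: %T\n", cfg.QPS, cfg.Burst, cs)
	}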
	I0725 10:50:21.292930    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.490166    3774 request.go:629] Waited for 197.198888ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:50:21.490242    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-485000-m03
	I0725 10:50:21.490248    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.490254    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.490258    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.492317    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:21.690619    3774 request.go:629] Waited for 197.80318ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:21.690692    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:21.690700    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.690707    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.690712    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.693156    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:21.693612    3774 pod_ready.go:92] pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:21.693621    3774 pod_ready.go:81] duration metric: took 400.680496ms for pod "kube-controller-manager-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.693628    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:21.889496    3774 request.go:629] Waited for 195.831751ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:50:21.889578    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65n48
	I0725 10:50:21.889584    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:21.889590    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:21.889594    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:21.891941    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:22.089475    3774 request.go:629] Waited for 196.98259ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:22.089532    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:22.089542    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.089554    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.089562    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.092579    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:22.093142    3774 pod_ready.go:92] pod "kube-proxy-65n48" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:22.093152    3774 pod_ready.go:81] duration metric: took 399.51252ms for pod "kube-proxy-65n48" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.093159    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.289233    3774 request.go:629] Waited for 195.965994ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:50:22.289304    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w8bj
	I0725 10:50:22.289317    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.289329    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.289336    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.292489    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:22.491051    3774 request.go:629] Waited for 197.933507ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:22.491135    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:22.491147    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.491172    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.491181    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.494764    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:22.495189    3774 pod_ready.go:92] pod "kube-proxy-9w8bj" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:22.495199    3774 pod_ready.go:81] duration metric: took 402.028626ms for pod "kube-proxy-9w8bj" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.495206    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.690648    3774 request.go:629] Waited for 195.401863ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:50:22.690726    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dc5jq
	I0725 10:50:22.690734    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.690741    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.690747    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.693258    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:22.889677    3774 request.go:629] Waited for 195.917412ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:22.889724    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:22.889761    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:22.889767    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:22.889773    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:22.892446    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:22.892903    3774 pod_ready.go:92] pod "kube-proxy-dc5jq" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:22.892913    3774 pod_ready.go:81] duration metric: took 397.69615ms for pod "kube-proxy-dc5jq" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:22.892920    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.090761    3774 request.go:629] Waited for 197.803166ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:50:23.090814    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mvbkh
	I0725 10:50:23.090819    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.090826    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.090831    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.093064    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:23.290673    3774 request.go:629] Waited for 196.835784ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:50:23.290721    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m04
	I0725 10:50:23.290730    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.290752    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.290763    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.293787    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:23.294164    3774 pod_ready.go:97] node "ha-485000-m04" hosting pod "kube-proxy-mvbkh" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-485000-m04" has status "Ready":"Unknown"
	I0725 10:50:23.294178    3774 pod_ready.go:81] duration metric: took 401.248534ms for pod "kube-proxy-mvbkh" in "kube-system" namespace to be "Ready" ...
	E0725 10:50:23.294187    3774 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-485000-m04" hosting pod "kube-proxy-mvbkh" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-485000-m04" has status "Ready":"Unknown"
	I0725 10:50:23.294199    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.490423    3774 request.go:629] Waited for 196.114187ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:50:23.490462    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000
	I0725 10:50:23.490468    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.490475    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.490481    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.493175    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:23.689290    3774 request.go:629] Waited for 195.78427ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:23.689415    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000
	I0725 10:50:23.689424    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.689435    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.689442    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.692307    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:23.692985    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:23.692998    3774 pod_ready.go:81] duration metric: took 398.785835ms for pod "kube-scheduler-ha-485000" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.693009    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:23.890957    3774 request.go:629] Waited for 197.902466ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:50:23.891038    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m02
	I0725 10:50:23.891044    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:23.891050    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:23.891055    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:23.893361    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:24.090079    3774 request.go:629] Waited for 196.359986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:24.090164    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m02
	I0725 10:50:24.090175    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.090187    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.090195    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.094081    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:24.094638    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:24.094651    3774 pod_ready.go:81] duration metric: took 401.630539ms for pod "kube-scheduler-ha-485000-m02" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:24.094660    3774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:24.290893    3774 request.go:629] Waited for 196.185136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:50:24.291015    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-485000-m03
	I0725 10:50:24.291026    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.291038    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.291045    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.294065    3774 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 10:50:24.489582    3774 request.go:629] Waited for 194.956133ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:24.489621    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-485000-m03
	I0725 10:50:24.489639    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.489681    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.489688    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.492299    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:24.492641    3774 pod_ready.go:92] pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 10:50:24.492651    3774 pod_ready.go:81] duration metric: took 397.980834ms for pod "kube-scheduler-ha-485000-m03" in "kube-system" namespace to be "Ready" ...
	I0725 10:50:24.492659    3774 pod_ready.go:38] duration metric: took 23.135149255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
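	The readiness phase logged above repeatedly GETs each control-plane pod, checks its Ready condition against the hosting node, and backs off when the client-side rate limiter throttles requests. As a rough, hedged sketch of the same idea (not minikube's actual pod_ready.go; the kubeconfig path and pod name are placeholders), a client-go poll could look like this:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; the run above uses the profile's own kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms for up to 6 minutes, mirroring the 6m0s per-pod timeout in the log.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-485000", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			return podReady(pod), nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}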
	I0725 10:50:24.492671    3774 api_server.go:52] waiting for apiserver process to appear ...
	I0725 10:50:24.492734    3774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 10:50:24.505745    3774 api_server.go:72] duration metric: took 23.348906591s to wait for apiserver process to appear ...
	I0725 10:50:24.505757    3774 api_server.go:88] waiting for apiserver healthz status ...
	I0725 10:50:24.505768    3774 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0725 10:50:24.508893    3774 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0725 10:50:24.508926    3774 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0725 10:50:24.508931    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.508937    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.508942    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.509529    3774 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0725 10:50:24.509577    3774 api_server.go:141] control plane version: v1.30.3
	I0725 10:50:24.509588    3774 api_server.go:131] duration metric: took 3.825788ms to wait for apiserver health ...
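	The apiserver health gate above is a plain GET against /healthz that must return 200 with the literal body "ok". A minimal standalone sketch of that probe follows; it is illustrative only and skips TLS verification instead of loading the cluster CA and client certificates from the kubeconfig, which a real check should do.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Illustration only: trust the cluster CA in real use instead of disabling verification.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.169.0.5:8443/healthz") // endpoint taken from the log above
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %q\n", resp.StatusCode, string(body)) // a healthy apiserver answers 200 "ok"
	}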
	I0725 10:50:24.509593    3774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 10:50:24.689653    3774 request.go:629] Waited for 180.026101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:24.689691    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:24.689696    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.689703    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.689707    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.694162    3774 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 10:50:24.699605    3774 system_pods.go:59] 26 kube-system pods found
	I0725 10:50:24.699621    3774 system_pods.go:61] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:50:24.699626    3774 system_pods.go:61] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:50:24.699629    3774 system_pods.go:61] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:50:24.699632    3774 system_pods.go:61] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running
	I0725 10:50:24.699635    3774 system_pods.go:61] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:50:24.699638    3774 system_pods.go:61] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:50:24.699641    3774 system_pods.go:61] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:50:24.699644    3774 system_pods.go:61] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:50:24.699647    3774 system_pods.go:61] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running
	I0725 10:50:24.699651    3774 system_pods.go:61] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:50:24.699654    3774 system_pods.go:61] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running
	I0725 10:50:24.699657    3774 system_pods.go:61] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:50:24.699660    3774 system_pods.go:61] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:50:24.699662    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running
	I0725 10:50:24.699665    3774 system_pods.go:61] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:50:24.699668    3774 system_pods.go:61] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:50:24.699670    3774 system_pods.go:61] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:50:24.699672    3774 system_pods.go:61] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running
	I0725 10:50:24.699676    3774 system_pods.go:61] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:50:24.699680    3774 system_pods.go:61] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:50:24.699682    3774 system_pods.go:61] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running
	I0725 10:50:24.699685    3774 system_pods.go:61] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:50:24.699687    3774 system_pods.go:61] "kube-vip-ha-485000" [d13235f9-c36c-4988-8638-4964a718c070] Running
	I0725 10:50:24.699690    3774 system_pods.go:61] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:50:24.699692    3774 system_pods.go:61] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:50:24.699697    3774 system_pods.go:61] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 10:50:24.699701    3774 system_pods.go:74] duration metric: took 190.102528ms to wait for pod list to return data ...
	I0725 10:50:24.699707    3774 default_sa.go:34] waiting for default service account to be created ...
	I0725 10:50:24.890642    3774 request.go:629] Waited for 190.892646ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:50:24.890727    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0725 10:50:24.890735    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:24.890743    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:24.890750    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:24.893133    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:24.893199    3774 default_sa.go:45] found service account: "default"
	I0725 10:50:24.893209    3774 default_sa.go:55] duration metric: took 193.493853ms for default service account to be created ...
	I0725 10:50:24.893214    3774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 10:50:25.089856    3774 request.go:629] Waited for 196.580095ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:25.089959    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0725 10:50:25.089969    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:25.089980    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:25.089989    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:25.095665    3774 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 10:50:25.100358    3774 system_pods.go:86] 26 kube-system pods found
	I0725 10:50:25.100371    3774 system_pods.go:89] "coredns-7db6d8ff4d-dv8wr" [ec96cdaf-7947-47eb-86f2-fe8d0abe4df0] Running
	I0725 10:50:25.100375    3774 system_pods.go:89] "coredns-7db6d8ff4d-pnmm6" [dfa8b197-af66-4d93-a36a-4bae553fc645] Running
	I0725 10:50:25.100378    3774 system_pods.go:89] "etcd-ha-485000" [4e45706d-8978-4a63-a143-f883f10858c3] Running
	I0725 10:50:25.100382    3774 system_pods.go:89] "etcd-ha-485000-m02" [f374a36e-0ff8-4594-bfde-18eafefc7d1b] Running
	I0725 10:50:25.100386    3774 system_pods.go:89] "etcd-ha-485000-m03" [46cfa75e-45cd-435a-a82a-526c09d1c4df] Running
	I0725 10:50:25.100389    3774 system_pods.go:89] "kindnet-2t428" [423220b2-da23-448e-8122-64e40acd4691] Running
	I0725 10:50:25.100393    3774 system_pods.go:89] "kindnet-bhkpj" [542a587d-241a-45fd-8e2a-753dae5a0fb8] Running
	I0725 10:50:25.100396    3774 system_pods.go:89] "kindnet-cq6bp" [f2a36380-2980-4faf-b814-d1ec1200ce84] Running
	I0725 10:50:25.100400    3774 system_pods.go:89] "kindnet-mvblc" [68c9f775-6a75-49ea-a5a8-4d723af5bbbc] Running
	I0725 10:50:25.100415    3774 system_pods.go:89] "kube-apiserver-ha-485000" [e5224687-a4b4-4352-bda0-428f319b9c05] Running
	I0725 10:50:25.100424    3774 system_pods.go:89] "kube-apiserver-ha-485000-m02" [5020aa28-687f-4fa9-896c-f31b65a7ab48] Running
	I0725 10:50:25.100429    3774 system_pods.go:89] "kube-apiserver-ha-485000-m03" [2082d046-e12f-4d92-9d96-72a2fd41ff2c] Running
	I0725 10:50:25.100435    3774 system_pods.go:89] "kube-controller-manager-ha-485000" [ee7e407e-239c-4c9e-b398-e72d9d5848ed] Running
	I0725 10:50:25.100453    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m02" [4128bb22-70cc-43d6-8c13-c17c92bde67f] Running
	I0725 10:50:25.100457    3774 system_pods.go:89] "kube-controller-manager-ha-485000-m03" [a5ecf268-552e-483c-92d1-46ee82561830] Running
	I0725 10:50:25.100460    3774 system_pods.go:89] "kube-proxy-65n48" [c430bcf4-2877-4d2e-a2b6-285996633e92] Running
	I0725 10:50:25.100465    3774 system_pods.go:89] "kube-proxy-9w8bj" [400eed26-954b-40b6-9114-104075954d43] Running
	I0725 10:50:25.100469    3774 system_pods.go:89] "kube-proxy-dc5jq" [6769272c-3965-4b11-ae2e-85f8b6a2d0f4] Running
	I0725 10:50:25.100472    3774 system_pods.go:89] "kube-proxy-mvbkh" [30908c7f-171c-4d89-8f93-69e304ca0ef0] Running
	I0725 10:50:25.100476    3774 system_pods.go:89] "kube-scheduler-ha-485000" [8c5ab86c-58ba-447a-8187-ec981654b255] Running
	I0725 10:50:25.100481    3774 system_pods.go:89] "kube-scheduler-ha-485000-m02" [1eae6c16-8573-4dde-a753-2aa3525772e2] Running
	I0725 10:50:25.100485    3774 system_pods.go:89] "kube-scheduler-ha-485000-m03" [99835e09-e8ee-43dc-8acf-811cf72224ca] Running
	I0725 10:50:25.100489    3774 system_pods.go:89] "kube-vip-ha-485000" [d13235f9-c36c-4988-8638-4964a718c070] Running
	I0725 10:50:25.100492    3774 system_pods.go:89] "kube-vip-ha-485000-m02" [3ffe906b-c3e7-416b-8b05-3a41ce0932ad] Running
	I0725 10:50:25.100495    3774 system_pods.go:89] "kube-vip-ha-485000-m03" [9d8f63a5-1318-441c-9b60-0afe5b20c9fd] Running
	I0725 10:50:25.100501    3774 system_pods.go:89] "storage-provisioner" [9d25ef1d-567f-4da3-a62e-16958da26713] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 10:50:25.100509    3774 system_pods.go:126] duration metric: took 207.289229ms to wait for k8s-apps to be running ...
	I0725 10:50:25.100516    3774 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 10:50:25.100565    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:50:25.111946    3774 system_svc.go:56] duration metric: took 11.425609ms WaitForService to wait for kubelet
	I0725 10:50:25.111960    3774 kubeadm.go:582] duration metric: took 23.955115877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 10:50:25.111982    3774 node_conditions.go:102] verifying NodePressure condition ...
	I0725 10:50:25.290128    3774 request.go:629] Waited for 178.101329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0725 10:50:25.290194    3774 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0725 10:50:25.290200    3774 round_trippers.go:469] Request Headers:
	I0725 10:50:25.290206    3774 round_trippers.go:473]     Accept: application/json, */*
	I0725 10:50:25.290210    3774 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0725 10:50:25.292310    3774 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 10:50:25.293114    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293124    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293137    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293141    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293144    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293147    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293150    3774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 10:50:25.293153    3774 node_conditions.go:123] node cpu capacity is 2
	I0725 10:50:25.293156    3774 node_conditions.go:105] duration metric: took 181.167189ms to run NodePressure ...
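	The NodePressure step above lists the nodes and reads their reported capacity (2 CPUs and 17734596Ki of ephemeral storage per node here), and the earlier skip of kube-proxy-mvbkh hinged on the node's Ready condition. A hedged client-go sketch of reading both pieces of information (not minikube's node_conditions.go; the kubeconfig path is a placeholder):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path, as in the earlier sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			ready := "Unknown"
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					ready = string(c.Status)
				}
			}
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s Ready=%s cpu=%s ephemeral-storage=%s\n",
				n.Name, ready, cpu.String(), storage.String())
		}
	}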
	I0725 10:50:25.293164    3774 start.go:241] waiting for startup goroutines ...
	I0725 10:50:25.293180    3774 start.go:255] writing updated cluster config ...
	I0725 10:50:25.316414    3774 out.go:177] 
	I0725 10:50:25.353963    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:25.354081    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:50:25.376345    3774 out.go:177] * Starting "ha-485000-m04" worker node in "ha-485000" cluster
	I0725 10:50:25.418264    3774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:50:25.418295    3774 cache.go:56] Caching tarball of preloaded images
	I0725 10:50:25.418479    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 10:50:25.418492    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:50:25.418579    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:50:25.419248    3774 start.go:360] acquireMachinesLock for ha-485000-m04: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:50:25.419314    3774 start.go:364] duration metric: took 49.541µs to acquireMachinesLock for "ha-485000-m04"
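	acquireMachinesLock above serializes machine create/start operations so only one goroutine or process touches "ha-485000-m04" at a time, retrying every 500ms with a 13m timeout. Minikube's own lock implementation is not shown in this log; purely as an illustration of the general pattern, a crude cross-process lock can be built on an exclusive file create:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireFileLock is a crude cross-process lock: whoever creates the lock
	// file first owns it; everyone else retries until the timeout expires.
	// Illustration only; this is not minikube's acquireMachinesLock.
	func acquireFileLock(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				return f.Close() // lock held; release later by removing the file
			}
			if !errors.Is(err, os.ErrExist) {
				return err
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond) // the lock config in the log uses a 500ms delay
		}
	}

	func main() {
		if err := acquireFileLock("/tmp/ha-485000-m04.lock", 13*time.Minute); err != nil {
			panic(err)
		}
		defer os.Remove("/tmp/ha-485000-m04.lock")
		fmt.Println("lock acquired")
	}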
	I0725 10:50:25.419336    3774 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:50:25.419342    3774 fix.go:54] fixHost starting: m04
	I0725 10:50:25.419646    3774 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:50:25.419671    3774 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:50:25.428557    3774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51966
	I0725 10:50:25.428876    3774 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:50:25.429185    3774 main.go:141] libmachine: Using API Version  1
	I0725 10:50:25.429196    3774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:50:25.429408    3774 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:50:25.429520    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:25.429598    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetState
	I0725 10:50:25.429683    3774 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:25.429771    3774 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid from json: 3386
	I0725 10:50:25.430679    3774 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid 3386 missing from process table
	I0725 10:50:25.430699    3774 fix.go:112] recreateIfNeeded on ha-485000-m04: state=Stopped err=<nil>
	I0725 10:50:25.430707    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	W0725 10:50:25.430787    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:50:25.451265    3774 out.go:177] * Restarting existing hyperkit VM for "ha-485000-m04" ...
	I0725 10:50:25.492428    3774 main.go:141] libmachine: (ha-485000-m04) Calling .Start
	I0725 10:50:25.492574    3774 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:25.492592    3774 main.go:141] libmachine: (ha-485000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/hyperkit.pid
	I0725 10:50:25.492648    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Using UUID c1175b3a-154e-40e8-a691-44a5e1615e54
	I0725 10:50:25.517781    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Generated MAC ba:e9:ef:e5:fe:75
	I0725 10:50:25.517800    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000
	I0725 10:50:25.517982    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c1175b3a-154e-40e8-a691-44a5e1615e54", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:50:25.518030    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c1175b3a-154e-40e8-a691-44a5e1615e54", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 10:50:25.518090    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "c1175b3a-154e-40e8-a691-44a5e1615e54", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/ha-485000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"}
	I0725 10:50:25.518138    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U c1175b3a-154e-40e8-a691-44a5e1615e54 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/ha-485000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-485000"
	I0725 10:50:25.518152    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 10:50:25.519588    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 DEBUG: hyperkit: Pid is 3811
	I0725 10:50:25.520056    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Attempt 0
	I0725 10:50:25.520068    3774 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:50:25.520140    3774 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid from json: 3811
	I0725 10:50:25.521299    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Searching for ba:e9:ef:e5:fe:75 in /var/db/dhcpd_leases ...
	I0725 10:50:25.521358    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0725 10:50:25.521373    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a3e1a8}
	I0725 10:50:25.521401    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e185}
	I0725 10:50:25.521437    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e173}
	I0725 10:50:25.521449    3774 main.go:141] libmachine: (ha-485000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a28fd0}
	I0725 10:50:25.521464    3774 main.go:141] libmachine: (ha-485000-m04) DBG | Found match: ba:e9:ef:e5:fe:75
	I0725 10:50:25.521477    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetConfigRaw
	I0725 10:50:25.521524    3774 main.go:141] libmachine: (ha-485000-m04) DBG | IP: 192.169.0.8
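	To recover the restarted VM's IP, the driver looks up the freshly generated MAC address in macOS's /var/db/dhcpd_leases, as the DBG lines above show. A simplified, order-independent parser for that lease file might look like the following; the block layout of dhcpd_leases is assumed from the entries echoed in the log, and the path and MAC are taken from this run.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans /var/db/dhcpd_leases for the lease block whose
	// hw_address matches mac and returns that block's ip_address.
	func findLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip, hw string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{": // new lease block
				ip, hw = "", ""
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// e.g. hw_address=1,ba:e9:ef:e5:fe:75 -> keep the part after the comma
				if parts := strings.SplitN(strings.TrimPrefix(line, "hw_address="), ",", 2); len(parts) == 2 {
					hw = parts[1]
				}
			case line == "}": // end of block: check for a match
				if ip != "" && strings.EqualFold(hw, mac) {
					return ip, nil
				}
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "ba:e9:ef:e5:fe:75")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("IP:", ip) // the log above resolved this MAC to 192.169.0.8
	}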
	I0725 10:50:25.522369    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetIP
	I0725 10:50:25.522631    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/ha-485000/config.json ...
	I0725 10:50:25.523230    3774 machine.go:94] provisionDockerMachine start ...
	I0725 10:50:25.523241    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:25.523348    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:25.523441    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:25.523528    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:25.523612    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:25.523711    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:25.523847    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:25.524050    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:25.524062    3774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 10:50:25.527797    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 10:50:25.536120    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 10:50:25.537142    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:50:25.537161    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:50:25.537173    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:50:25.537185    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:50:25.927659    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 10:50:25.927675    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 10:50:26.042400    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 10:50:26.042420    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 10:50:26.042429    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 10:50:26.042435    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 10:50:26.043251    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 10:50:26.043265    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 10:50:31.642036    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 10:50:31.642050    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 10:50:31.642059    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 10:50:31.665420    3774 main.go:141] libmachine: (ha-485000-m04) DBG | 2024/07/25 10:50:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 10:50:36.584591    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 10:50:36.584607    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetMachineName
	I0725 10:50:36.584735    3774 buildroot.go:166] provisioning hostname "ha-485000-m04"
	I0725 10:50:36.584758    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetMachineName
	I0725 10:50:36.584843    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.584928    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:36.585028    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.585116    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.585203    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:36.585343    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:36.585486    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:36.585494    3774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-485000-m04 && echo "ha-485000-m04" | sudo tee /etc/hostname
	I0725 10:50:36.650595    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-485000-m04
	
	I0725 10:50:36.650612    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.650747    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:36.650856    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.650943    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.651028    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:36.651142    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:36.651292    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:36.651304    3774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-485000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-485000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-485000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 10:50:36.714348    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
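	Each provisioning step above ("About to run SSH command: ...") executes a shell snippet on the guest over SSH, authenticating with the machine's private key. A bare-bones equivalent using golang.org/x/crypto/ssh follows; it is not minikube's ssh_runner, the key path and address are the placeholders seen in this run, and host-key checking is disabled purely for brevity.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and address mirror the values in the log; treat them as placeholders.
		keyBytes, err := os.ReadFile("/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only; verify host keys in real use
		}
		client, err := ssh.Dial("tcp", "192.169.0.8:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput(`hostname`)
		if err != nil {
			panic(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}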
	I0725 10:50:36.714365    3774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 10:50:36.714375    3774 buildroot.go:174] setting up certificates
	I0725 10:50:36.714381    3774 provision.go:84] configureAuth start
	I0725 10:50:36.714387    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetMachineName
	I0725 10:50:36.714525    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetIP
	I0725 10:50:36.714638    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.714724    3774 provision.go:143] copyHostCerts
	I0725 10:50:36.714755    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:50:36.714823    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 10:50:36.714829    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 10:50:36.714964    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 10:50:36.715183    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:50:36.715234    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 10:50:36.715240    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 10:50:36.715334    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 10:50:36.715487    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:50:36.715532    3774 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 10:50:36.715542    3774 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 10:50:36.715621    3774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 10:50:36.715776    3774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.ha-485000-m04 san=[127.0.0.1 192.169.0.8 ha-485000-m04 localhost minikube]
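	The "generating server cert ... san=[127.0.0.1 192.169.0.8 ha-485000-m04 localhost minikube]" step mints a TLS server certificate signed by the machine CA, with those names and addresses in its Subject Alternative Names. Minikube's helpers aren't reproduced here; a hedged crypto/x509 sketch that self-signs a throwaway CA (instead of loading ca.pem/ca-key.pem) and then issues a SAN-bearing server certificate:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA for illustration; the real run signs with the existing ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "example-ca"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			BasicConstraintsValid: true,
			KeyUsage:              x509.KeyUsageCertSign,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}

		// Server certificate carrying the SANs reported in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-485000-m04"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-485000-m04", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}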
	I0725 10:50:36.928446    3774 provision.go:177] copyRemoteCerts
	I0725 10:50:36.928501    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 10:50:36.928518    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:36.928667    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:36.928766    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:36.928853    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:36.928937    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:36.963525    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 10:50:36.963602    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 10:50:36.983132    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 10:50:36.983215    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 10:50:37.002467    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 10:50:37.002541    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 10:50:37.021539    3774 provision.go:87] duration metric: took 307.147237ms to configureAuth
	I0725 10:50:37.021553    3774 buildroot.go:189] setting minikube options for container-runtime
	I0725 10:50:37.021739    3774 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:37.021752    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:37.021873    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:37.021953    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:37.022028    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.022114    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.022191    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:37.022294    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:37.022425    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:37.022432    3774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 10:50:37.077868    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 10:50:37.077881    3774 buildroot.go:70] root file system type: tmpfs
	I0725 10:50:37.077955    3774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 10:50:37.077968    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:37.078088    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:37.078184    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.078265    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.078353    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:37.078490    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:37.078627    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:37.078675    3774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 10:50:37.144185    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
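	
	The unit written above first clears the inherited ExecStart= and only then sets its own command, which is how systemd expects a single-command service to be overridden; without the empty directive the service would accumulate two ExecStart= entries and refuse to start, as the comment in the unit itself notes. A minimal sketch of the same ExecStart-clearing pattern as a stand-alone drop-in (paths and dockerd flags here are illustrative, not taken from this run):
	
	  # hypothetical override file, shown only to illustrate the pattern
	  sudo mkdir -p /etc/systemd/system/docker.service.d
	  sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf <<'EOF'
	  [Service]
	  # the empty directive wipes the ExecStart list inherited from the base unit
	  ExecStart=
	  # the second directive supplies the single replacement command
	  ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	  EOF
	  sudo systemctl daemon-reload && sudo systemctl restart docker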
	
	I0725 10:50:37.144203    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:37.144342    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:37.144434    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.144523    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:37.144614    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:37.144742    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:37.144915    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:37.144928    3774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 10:50:38.716011    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
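	
	The replace-if-changed command above relies on diff's exit status: diff -u returns zero only when the current unit and the freshly written .new file are identical, so the "|| { ... }" branch (move into place, daemon-reload, enable, restart) runs both when the unit has changed and, as here, when it did not exist at all ("can't stat ... No such file or directory"). The same idiom in isolation, with placeholder paths:
	
	  new=/tmp/docker.service.new
	  cur=/lib/systemd/system/docker.service
	  # exit 0 = identical; non-zero = files differ or $cur is missing
	  sudo diff -u "$cur" "$new" || {
	    sudo mv "$new" "$cur"
	    sudo systemctl daemon-reload
	    sudo systemctl enable docker
	    sudo systemctl restart docker
	  }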
	
	I0725 10:50:38.716027    3774 machine.go:97] duration metric: took 13.192614029s to provisionDockerMachine
	I0725 10:50:38.716035    3774 start.go:293] postStartSetup for "ha-485000-m04" (driver="hyperkit")
	I0725 10:50:38.716042    3774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 10:50:38.716057    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:38.716243    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 10:50:38.716257    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:38.716357    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:38.716441    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.716528    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:38.716625    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:38.757562    3774 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 10:50:38.761046    3774 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 10:50:38.761061    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 10:50:38.761168    3774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 10:50:38.761354    3774 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 10:50:38.761360    3774 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> /etc/ssl/certs/17322.pem
	I0725 10:50:38.761571    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 10:50:38.769992    3774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 10:50:38.800110    3774 start.go:296] duration metric: took 84.064957ms for postStartSetup
	I0725 10:50:38.800132    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:38.800311    3774 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 10:50:38.800325    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:38.800410    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:38.800488    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.800581    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:38.800667    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:38.835801    3774 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0725 10:50:38.835861    3774 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0725 10:50:38.889497    3774 fix.go:56] duration metric: took 13.469972078s for fixHost
	I0725 10:50:38.889527    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:38.889666    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:38.889774    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.889857    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:38.889955    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:38.890084    3774 main.go:141] libmachine: Using SSH client type: native
	I0725 10:50:38.890230    3774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcd5e0c0] 0xcd60e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0725 10:50:38.890238    3774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 10:50:38.946246    3774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929837.991587228
	
	I0725 10:50:38.946261    3774 fix.go:216] guest clock: 1721929837.991587228
	I0725 10:50:38.946267    3774 fix.go:229] Guest: 2024-07-25 10:50:37.991587228 -0700 PDT Remote: 2024-07-25 10:50:38.889513 -0700 PDT m=+132.915879826 (delta=-897.925772ms)
	I0725 10:50:38.946278    3774 fix.go:200] guest clock delta is within tolerance: -897.925772ms
	I0725 10:50:38.946282    3774 start.go:83] releasing machines lock for "ha-485000-m04", held for 13.526780386s
	I0725 10:50:38.946300    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:38.946427    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetIP
	I0725 10:50:38.970830    3774 out.go:177] * Found network options:
	I0725 10:50:38.991565    3774 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0725 10:50:39.012683    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.012705    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.012716    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:50:39.012729    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:39.013273    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:39.013419    3774 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:50:39.013516    3774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 10:50:39.013540    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	W0725 10:50:39.013573    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.013595    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 10:50:39.013611    3774 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 10:50:39.013662    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:39.013696    3774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 10:50:39.013711    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:50:39.013838    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:50:39.013857    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:39.014008    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:50:39.014022    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:39.014127    3774 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:50:39.014144    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:50:39.014253    3774 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	W0725 10:50:39.045644    3774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 10:50:39.045702    3774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 10:50:39.092083    3774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 10:50:39.092097    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:50:39.092166    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:50:39.107318    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 10:50:39.116414    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 10:50:39.125304    3774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 10:50:39.125351    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 10:50:39.134448    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:50:39.143660    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 10:50:39.152627    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 10:50:39.161628    3774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 10:50:39.170919    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 10:50:39.179944    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 10:50:39.189118    3774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 10:50:39.198245    3774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 10:50:39.206354    3774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 10:50:39.214527    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:39.316588    3774 ssh_runner.go:195] Run: sudo systemctl restart containerd
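	
	The sed edits above rewrite /etc/containerd/config.toml in place so that containerd keeps the cgroupfs driver (SystemdCgroup = false), uses the runc v2 shim, pins the pause image to registry.k8s.io/pause:3.9 and points the CNI conf_dir at /etc/cni/net.d, while /etc/crictl.yaml is aimed at the containerd socket. A quick way to confirm the end state those edits aim for, run inside the node (expected values are exactly the ones substituted above):
	
	  cat /etc/crictl.yaml                                  # runtime-endpoint: unix:///run/containerd/containerd.sock
	  grep -n 'SystemdCgroup' /etc/containerd/config.toml   # SystemdCgroup = false
	  grep -n 'sandbox_image' /etc/containerd/config.toml   # sandbox_image = "registry.k8s.io/pause:3.9"
	  grep -n 'conf_dir' /etc/containerd/config.toml        # conf_dir = "/etc/cni/net.d"
	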
	I0725 10:50:39.336368    3774 start.go:495] detecting cgroup driver to use...
	I0725 10:50:39.336436    3774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 10:50:39.358127    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:50:39.374161    3774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 10:50:39.391711    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 10:50:39.402592    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:50:39.413153    3774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 10:50:39.435734    3774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 10:50:39.446272    3774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 10:50:39.461753    3774 ssh_runner.go:195] Run: which cri-dockerd
	I0725 10:50:39.464748    3774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 10:50:39.475362    3774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 10:50:39.488985    3774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 10:50:39.584434    3774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 10:50:39.693634    3774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 10:50:39.693658    3774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
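	
	The 130-byte daemon.json written here is not echoed in the log; for dockerd, the cgroup driver named in the preceding line is normally selected through the exec-opts key, so the file plausibly resembles the following sketch (contents are an assumption, not the bytes copied in this run):
	
	  # hypothetical /etc/docker/daemon.json selecting the cgroupfs driver
	  sudo tee /etc/docker/daemon.json <<'EOF'
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"]
	  }
	  EOF
	
	The daemon-reload and docker restart that follow are what would pick such a change up, and that restart is the step that fails below.
	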
	I0725 10:50:39.707727    3774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 10:50:39.811725    3774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 10:51:40.826240    3774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.01368475s)
	I0725 10:51:40.826323    3774 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0725 10:51:40.860544    3774 out.go:177] 
	W0725 10:51:40.881235    3774 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 25 17:50:36 ha-485000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.491525254Z" level=info msg="Starting up"
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.491953665Z" level=info msg="containerd not running, starting managed containerd"
	Jul 25 17:50:36 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:36.492498920Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.507900444Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.528985080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529119487Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529190184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529227367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529382906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529449495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529613424Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529662631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529706898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529742376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.529932288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.530230915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.531799836Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.531853394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532013423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532060003Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532224150Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.532284958Z" level=info msg="metadata content store policy set" policy=shared
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533667564Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533767716Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533813069Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533911547Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.533958502Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534060695Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534298445Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534406555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534446041Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534478140Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534510008Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534540807Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534571096Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534602037Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534636987Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534677965Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534711495Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534740942Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534776402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534811869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534850836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534886582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534922068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534956302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.534986255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535016075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535048332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535088208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535121326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535150501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535236221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535275301Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535313456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535345038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535373791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535440294Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535482913Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535628751Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535672410Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535703274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535734362Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535762669Z" level=info msg="NRI interface is disabled by configuration."
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.535960514Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536080093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536142938Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 25 17:50:36 ha-485000-m04 dockerd[495]: time="2024-07-25T17:50:36.536179093Z" level=info msg="containerd successfully booted in 0.029080s"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.510927923Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.523832948Z" level=info msg="Loading containers: start."
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.618418659Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.680635969Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.724258639Z" level=info msg="Loading containers: done."
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.734502052Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.734720064Z" level=info msg="Daemon has completed initialization"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.758872412Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 17:50:37 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:37.759079256Z" level=info msg="API listen on [::]:2376"
	Jul 25 17:50:37 ha-485000-m04 systemd[1]: Started Docker Application Container Engine.
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.869455528Z" level=info msg="Processing signal 'terminated'"
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870413697Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870828965Z" level=info msg="Daemon shutdown complete"
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870895825Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 17:50:38 ha-485000-m04 dockerd[488]: time="2024-07-25T17:50:38.870897244Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 25 17:50:38 ha-485000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: docker.service: Deactivated successfully.
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 17:50:39 ha-485000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 17:50:39 ha-485000-m04 dockerd[1172]: time="2024-07-25T17:50:39.901687066Z" level=info msg="Starting up"
	Jul 25 17:51:40 ha-485000-m04 dockerd[1172]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 25 17:51:40 ha-485000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
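	
	The second dockerd start (pid 1172) never gets past "Starting up": after roughly a minute it gives up with failed to dial "/run/containerd/containerd.sock": context deadline exceeded, i.e. the daemon is waiting on a containerd socket that never becomes available, and systemd then marks docker.service failed. When triaging this by hand, the usual follow-up from inside the node (for example over the same SSH session minikube uses) is to check the separate containerd unit and the socket it should own; a minimal sketch:
	
	  sudo systemctl status containerd --no-pager      # is the system containerd unit actually running?
	  sudo journalctl -u containerd --no-pager -n 50   # and if not, why it stopped
	  ls -l /run/containerd/containerd.sock            # does the socket dockerd is dialing exist at all?
	  sudo systemctl restart containerd docker         # retry once containerd is healthy
	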
	W0725 10:51:40.881304    3774 out.go:239] * 
	W0725 10:51:40.882170    3774 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 10:51:40.944200    3774 out.go:177] 
	
	
	==> Docker <==
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.054813784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.059017705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.059083409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.059096144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.059214610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.069300205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.070935517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.071147226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.071268178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.125259400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.125463894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.125566513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.125736677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.267673296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.267738318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.267747857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:49:32 ha-485000 dockerd[1163]: time="2024-07-25T17:49:32.267820408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:50:02 ha-485000 dockerd[1157]: time="2024-07-25T17:50:02.379012546Z" level=info msg="ignoring event" container=33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 17:50:02 ha-485000 dockerd[1163]: time="2024-07-25T17:50:02.380251166Z" level=info msg="shim disconnected" id=33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a namespace=moby
	Jul 25 17:50:02 ha-485000 dockerd[1163]: time="2024-07-25T17:50:02.380604445Z" level=warning msg="cleaning up after shim disconnected" id=33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a namespace=moby
	Jul 25 17:50:02 ha-485000 dockerd[1163]: time="2024-07-25T17:50:02.380651459Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 25 17:50:27 ha-485000 dockerd[1163]: time="2024-07-25T17:50:27.136247496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:50:27 ha-485000 dockerd[1163]: time="2024-07-25T17:50:27.136372896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:50:27 ha-485000 dockerd[1163]: time="2024-07-25T17:50:27.136388045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:50:27 ha-485000 dockerd[1163]: time="2024-07-25T17:50:27.138147412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5090fc1b1203b       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   b4e0e7e1de4e1       storage-provisioner
	a9a19bab6ef80       55bb025d2cfa5                                                                                         2 minutes ago        Running             kube-proxy                1                   fe4b8e6e60c09       kube-proxy-9w8bj
	09f6510c1f42c       6f1d07c71fa0f                                                                                         2 minutes ago        Running             kindnet-cni               1                   e8f3399f13f16       kindnet-bhkpj
	a31dd6f7c9844       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   9fea40676b8e3       coredns-7db6d8ff4d-dv8wr
	e4820d567662e       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   8af4e93204e34       busybox-fc5497c4f-zq4hj
	33197b74cd7a5       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   b4e0e7e1de4e1       storage-provisioner
	b1634e3371c38       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   f5370b1e66d0a       coredns-7db6d8ff4d-pnmm6
	b2f637b4a6f7e       76932a3b37d7e                                                                                         2 minutes ago        Running             kube-controller-manager   2                   5a37933ef701f       kube-controller-manager-ha-485000
	c5fb9f8438921       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  0                   791f7e270cf81       kube-vip-ha-485000
	00f3c5f6f1b53       76932a3b37d7e                                                                                         3 minutes ago        Exited              kube-controller-manager   1                   5a37933ef701f       kube-controller-manager-ha-485000
	904a632ce7278       3861cfcd7c04c                                                                                         3 minutes ago        Running             etcd                      1                   aa35809cc9a2e       etcd-ha-485000
	5133be0bb8f02       3edc18e7b7672                                                                                         3 minutes ago        Running             kube-scheduler            1                   82b16c3ebba48       kube-scheduler-ha-485000
	bb09ac23fb5c5       1f6d574d502f3                                                                                         3 minutes ago        Running             kube-apiserver            1                   756d86d4401d1       kube-apiserver-ha-485000
	2fb2739ec04ab       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   bb72d6822c0fe       busybox-fc5497c4f-zq4hj
	a05e339cb9497       cbb01a7bd410d                                                                                         8 minutes ago        Exited              coredns                   0                   fc9af6795be55       coredns-7db6d8ff4d-pnmm6
	03e08b86c39eb       cbb01a7bd410d                                                                                         8 minutes ago        Exited              coredns                   0                   b09502869c625       coredns-7db6d8ff4d-dv8wr
	59bc560fbb478       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              8 minutes ago        Exited              kindnet-cni               0                   653a8381d2daf       kindnet-bhkpj
	f34917d25cfb4       55bb025d2cfa5                                                                                         8 minutes ago        Exited              kube-proxy                0                   5e7d6ddf78ead       kube-proxy-9w8bj
	37dcd3b2e16e9       3861cfcd7c04c                                                                                         8 minutes ago        Exited              etcd                      0                   dd6c687a8ef70       etcd-ha-485000
	d070da633e824       1f6d574d502f3                                                                                         8 minutes ago        Exited              kube-apiserver            0                   e3da2073892ac       kube-apiserver-ha-485000
	616226aa67d06       3edc18e7b7672                                                                                         8 minutes ago        Exited              kube-scheduler            0                   5cf9e1c76cde6       kube-scheduler-ha-485000
	
	
	==> coredns [03e08b86c39e] <==
	[INFO] 10.244.1.2:47982 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126089s
	[INFO] 10.244.1.2:51014 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118313s
	[INFO] 10.244.1.2:50008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011365s
	[INFO] 10.244.1.2:32909 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082624s
	[INFO] 10.244.0.4:51190 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000474846s
	[INFO] 10.244.0.4:33582 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000106891s
	[INFO] 10.244.0.4:41006 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131032s
	[INFO] 10.244.0.4:51357 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075568s
	[INFO] 10.244.2.2:36960 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121945s
	[INFO] 10.244.2.2:33774 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093868s
	[INFO] 10.244.1.2:53754 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143927s
	[INFO] 10.244.1.2:48688 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063165s
	[INFO] 10.244.0.4:44664 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000211004s
	[INFO] 10.244.0.4:37456 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000036138s
	[INFO] 10.244.0.4:56948 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084971s
	[INFO] 10.244.2.2:47405 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154209s
	[INFO] 10.244.2.2:52677 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123121s
	[INFO] 10.244.1.2:33651 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136058s
	[INFO] 10.244.1.2:35440 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113476s
	[INFO] 10.244.1.2:55517 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000157954s
	[INFO] 10.244.0.4:50721 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102424s
	[INFO] 10.244.0.4:35575 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108877s
	[INFO] 10.244.0.4:46484 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000032004s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a05e339cb949] <==
	[INFO] 10.244.2.2:58701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089843s
	[INFO] 10.244.2.2:45510 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123323s
	[INFO] 10.244.2.2:51099 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121589s
	[INFO] 10.244.2.2:34142 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081501s
	[INFO] 10.244.2.2:53658 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114171s
	[INFO] 10.244.2.2:42295 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137781s
	[INFO] 10.244.1.2:43309 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000505919s
	[INFO] 10.244.1.2:45207 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000056892s
	[INFO] 10.244.1.2:59026 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037705s
	[INFO] 10.244.1.2:38175 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000360332s
	[INFO] 10.244.0.4:34324 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059911s
	[INFO] 10.244.0.4:56363 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000234456s
	[INFO] 10.244.0.4:45101 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172739s
	[INFO] 10.244.0.4:55137 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050682s
	[INFO] 10.244.2.2:40287 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000066074s
	[INFO] 10.244.2.2:34939 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040408s
	[INFO] 10.244.1.2:39138 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069486s
	[INFO] 10.244.1.2:43850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103193s
	[INFO] 10.244.0.4:60304 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000059394s
	[INFO] 10.244.2.2:48042 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111442s
	[INFO] 10.244.2.2:55217 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000131122s
	[INFO] 10.244.1.2:60146 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013068s
	[INFO] 10.244.0.4:54404 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100673s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a31dd6f7c984] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48270 - 2612 "HINFO IN 477576417202223145.7395415910525641618. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.010832063s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1813733051]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.339) (total time: 30002ms):
	Trace[1813733051]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:50:02.340)
	Trace[1813733051]: [30.002590373s] [30.002590373s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[385183908]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.340) (total time: 30002ms):
	Trace[385183908]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:50:02.342)
	Trace[385183908]: [30.002579942s] [30.002579942s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1402818788]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.339) (total time: 30004ms):
	Trace[1402818788]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:50:02.340)
	Trace[1402818788]: [30.004004021s] [30.004004021s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [b1634e3371c3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52414 - 10418 "HINFO IN 8033717330104741663.4370907126619795494. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010811934s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[924906424]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.340) (total time: 30001ms):
	Trace[924906424]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:50:02.341)
	Trace[924906424]: [30.001155479s] [30.001155479s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1854012052]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.340) (total time: 30001ms):
	Trace[1854012052]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:50:02.342)
	Trace[1854012052]: [30.001703258s] [30.001703258s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[364532444]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:49:32.340) (total time: 30003ms):
	Trace[364532444]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:50:02.342)
	Trace[364532444]: [30.003093498s] [30.003093498s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
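	
	Both coredns replicas show the same failure mode: the pods start, but every list/watch against https://10.96.0.1:443 (the cluster-internal kubernetes Service VIP for the apiserver) times out after 30s, so the kubernetes plugin never syncs. That points at the VIP path rather than at coredns itself; the usual quick checks against the same cluster are whether the Service has healthy endpoints, whether the apiserver answers directly, and whether kube-proxy (which programs the VIP rules on each node) is running. A short sketch:
	
	  kubectl get endpoints kubernetes -n default -o wide    # should list the reachable apiserver address(es)
	  kubectl get --raw='/readyz?verbose'                    # apiserver health, bypassing the 10.96.0.1 VIP
	  kubectl -n kube-system get pods -l k8s-app=kube-proxy  # the component that makes 10.96.0.1 routable on nodes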
	
	
	==> describe nodes <==
	Name:               ha-485000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-485000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-485000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T10_43_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:43:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-485000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:51:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:49:21 +0000   Thu, 25 Jul 2024 17:43:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:49:21 +0000   Thu, 25 Jul 2024 17:43:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:49:21 +0000   Thu, 25 Jul 2024 17:43:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:49:21 +0000   Thu, 25 Jul 2024 17:43:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-485000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 86b27484c38a4d51a4045582f517cb4e
	  System UUID:                6bc54fb5-0000-0000-b22e-c84ce3b6b1d3
	  Boot ID:                    62867363-c68f-485a-a817-570e934bdef6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-zq4hj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 coredns-7db6d8ff4d-dv8wr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m26s
	  kube-system                 coredns-7db6d8ff4d-pnmm6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m26s
	  kube-system                 etcd-ha-485000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m39s
	  kube-system                 kindnet-bhkpj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m26s
	  kube-system                 kube-apiserver-ha-485000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-controller-manager-ha-485000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-proxy-9w8bj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-scheduler-ha-485000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-vip-ha-485000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m22s                  kube-proxy       
	  Normal  Starting                 8m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m46s                  kubelet          Node ha-485000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m39s                  kubelet          Node ha-485000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m39s                  kubelet          Node ha-485000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m39s                  kubelet          Node ha-485000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m39s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m27s                  node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  NodeReady                8m7s                   kubelet          Node ha-485000 status is now: NodeReady
	  Normal  RegisteredNode           7m11s                  node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  RegisteredNode           5m59s                  node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  Starting                 3m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m10s (x8 over 3m10s)  kubelet          Node ha-485000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m10s (x8 over 3m10s)  kubelet          Node ha-485000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m10s (x7 over 3m10s)  kubelet          Node ha-485000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m29s                  node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	  Normal  RegisteredNode           96s                    node-controller  Node ha-485000 event: Registered Node ha-485000 in Controller
	
	
	Name:               ha-485000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-485000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-485000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T10_44_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:44:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-485000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:51:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:49:14 +0000   Thu, 25 Jul 2024 17:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:49:14 +0000   Thu, 25 Jul 2024 17:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:49:14 +0000   Thu, 25 Jul 2024 17:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:49:14 +0000   Thu, 25 Jul 2024 17:44:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-485000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 074970d7a4814e46ae2a22d0bd0e4ff6
	  System UUID:                528f4ab7-0000-0000-922b-886237fb4fc4
	  Boot ID:                    20d7daee-b931-421b-a580-3b8bb7dfca58
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fmpmr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 etcd-ha-485000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m27s
	  kube-system                 kindnet-mvblc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m27s
	  kube-system                 kube-apiserver-ha-485000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-controller-manager-ha-485000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-proxy-dc5jq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-scheduler-ha-485000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-vip-ha-485000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m24s                  kube-proxy       
	  Normal   Starting                 2m36s                  kube-proxy       
	  Normal   Starting                 4m11s                  kube-proxy       
	  Normal   Starting                 7m28s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  7m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m27s (x2 over 7m28s)  kubelet          Node ha-485000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m27s (x2 over 7m28s)  kubelet          Node ha-485000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m27s (x2 over 7m28s)  kubelet          Node ha-485000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m22s                  node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   RegisteredNode           7m11s                  node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   NodeReady                7m9s                   kubelet          Node ha-485000-m02 status is now: NodeReady
	  Normal   RegisteredNode           5m59s                  node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   NodeHasSufficientPID     4m14s                  kubelet          Node ha-485000-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 4m14s                  kubelet          Node ha-485000-m02 has been rebooted, boot id: 9cf20695-d97a-4263-89d3-0ce013db4ad6
	  Normal   Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m14s                  kubelet          Node ha-485000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m14s                  kubelet          Node ha-485000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           3m57s                  node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   Starting                 2m52s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m51s (x8 over 2m52s)  kubelet          Node ha-485000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m51s (x8 over 2m52s)  kubelet          Node ha-485000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m51s (x7 over 2m52s)  kubelet          Node ha-485000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m29s                  node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   RegisteredNode           2m20s                  node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	  Normal   RegisteredNode           96s                    node-controller  Node ha-485000-m02 event: Registered Node ha-485000-m02 in Controller
	
	
	Name:               ha-485000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-485000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-485000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T10_46_32_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:46:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-485000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:47:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 25 Jul 2024 17:47:02 +0000   Thu, 25 Jul 2024 17:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 25 Jul 2024 17:47:02 +0000   Thu, 25 Jul 2024 17:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 25 Jul 2024 17:47:02 +0000   Thu, 25 Jul 2024 17:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 25 Jul 2024 17:47:02 +0000   Thu, 25 Jul 2024 17:50:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-485000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ffb85b1f8a234bafbaf23076eaf9ec4c
	  System UUID:                c11740e8-0000-0000-a691-44a5e1615e54
	  Boot ID:                    43ffead3-7e46-4ceb-8092-37bb0c6b4a3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cq6bp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m22s
	  kube-system                 kube-proxy-mvbkh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    5m23s (x2 over 5m23s)  kubelet          Node ha-485000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m23s (x2 over 5m23s)  kubelet          Node ha-485000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m23s (x2 over 5m23s)  kubelet          Node ha-485000-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  NodeReady                5m                     kubelet          Node ha-485000-m04 status is now: NodeReady
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  RegisteredNode           2m29s                  node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-485000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           96s                    node-controller  Node ha-485000-m04 event: Registered Node ha-485000-m04 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035231] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008075] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.691281] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007298] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.796480] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.252745] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.283283] systemd-fstab-generator[464]: Ignoring "noauto" option for root device
	[  +0.095770] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +1.941045] systemd-fstab-generator[1084]: Ignoring "noauto" option for root device
	[  +0.252394] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +0.099578] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.103399] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +0.053926] kauditd_printk_skb: 145 callbacks suppressed
	[  +2.386735] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.102643] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.098759] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.136380] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.447555] systemd-fstab-generator[1564]: Ignoring "noauto" option for root device
	[  +6.725237] kauditd_printk_skb: 168 callbacks suppressed
	[Jul25 17:49] kauditd_printk_skb: 40 callbacks suppressed
	[Jul25 17:50] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [37dcd3b2e16e] <==
	{"level":"info","ts":"2024-07-25T17:48:18.115397Z","caller":"traceutil/trace.go:171","msg":"trace[271686631] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; }","duration":"3.705255846s","start":"2024-07-25T17:48:14.410139Z","end":"2024-07-25T17:48:18.115395Z","steps":["trace[271686631] 'agreement among raft nodes before linearized reading'  (duration: 3.705244532s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:48:18.115407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:48:14.410131Z","time spent":"3.705272729s","remote":"127.0.0.1:55858","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true "}
	2024/07/25 17:48:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-25T17:48:18.115454Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:48:13.072941Z","time spent":"5.042510732s","remote":"127.0.0.1:55972","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/25 17:48:18 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-25T17:48:18.153243Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T17:48:18.15329Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-25T17:48:18.154957Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-25T17:48:18.155169Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155183Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155198Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155281Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155307Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155328Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.155336Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"b2c6eed6edf2c446"}
	{"level":"info","ts":"2024-07-25T17:48:18.15534Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155346Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155357Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155612Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155639Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155686Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.155716Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:48:18.158712Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-25T17:48:18.15883Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-25T17:48:18.158858Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-485000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [904a632ce727] <==
	{"level":"info","ts":"2024-07-25T17:50:02.973504Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:50:02.983255Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"513c51a7eb0a4980","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-25T17:50:02.983303Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:50:02.983517Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:50:03.001392Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"513c51a7eb0a4980","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-25T17:50:03.001456Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:51:49.040247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(12882246391022404678 13314548521573537860)"}
	{"level":"info","ts":"2024-07-25T17:51:49.041836Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"513c51a7eb0a4980","removed-remote-peer-urls":["https://192.169.0.7:2380"]}
	{"level":"warn","ts":"2024-07-25T17:51:49.041994Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"b8c6c7563d17d844","removed-member-id":"513c51a7eb0a4980"}
	{"level":"warn","ts":"2024-07-25T17:51:49.042113Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-07-25T17:51:49.041918Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"warn","ts":"2024-07-25T17:51:49.04254Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:51:49.042771Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"warn","ts":"2024-07-25T17:51:49.04327Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:51:49.04386Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:51:49.044005Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"warn","ts":"2024-07-25T17:51:49.044284Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980","error":"context canceled"}
	{"level":"warn","ts":"2024-07-25T17:51:49.044339Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"513c51a7eb0a4980","error":"failed to read 513c51a7eb0a4980 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-25T17:51:49.044366Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"warn","ts":"2024-07-25T17:51:49.044649Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980","error":"context canceled"}
	{"level":"info","ts":"2024-07-25T17:51:49.044756Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:51:49.044892Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"513c51a7eb0a4980"}
	{"level":"info","ts":"2024-07-25T17:51:49.045007Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"513c51a7eb0a4980"}
	{"level":"warn","ts":"2024-07-25T17:51:49.059223Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b8c6c7563d17d844","remote-peer-id-stream-handler":"b8c6c7563d17d844","remote-peer-id-from":"513c51a7eb0a4980"}
	{"level":"warn","ts":"2024-07-25T17:51:49.067587Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b8c6c7563d17d844","remote-peer-id-stream-handler":"b8c6c7563d17d844","remote-peer-id-from":"513c51a7eb0a4980"}
	
	
	==> kernel <==
	 17:51:55 up 3 min,  0 users,  load average: 0.40, 0.36, 0.15
	Linux ha-485000 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [09f6510c1f42] <==
	I0725 17:51:23.232718       1 main.go:299] handling current node
	I0725 17:51:23.232727       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:51:23.232731       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:51:33.224521       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:51:33.224559       1 main.go:299] handling current node
	I0725 17:51:33.224570       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:51:33.224575       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:51:33.224831       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0725 17:51:33.224860       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:51:33.224964       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:51:33.224994       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	I0725 17:51:43.233605       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:51:43.233640       1 main.go:299] handling current node
	I0725 17:51:43.233682       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:51:43.233718       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:51:43.233797       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0725 17:51:43.233805       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:51:43.233846       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:51:43.233852       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	I0725 17:51:53.226004       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:51:53.226047       1 main.go:299] handling current node
	I0725 17:51:53.226057       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:51:53.226062       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:51:53.226381       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:51:53.226411       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [59bc560fbb47] <==
	I0725 17:47:42.711271       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	I0725 17:47:52.709603       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:47:52.709679       1 main.go:299] handling current node
	I0725 17:47:52.709701       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:47:52.709715       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:47:52.709796       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0725 17:47:52.709836       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:47:52.709917       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:47:52.709960       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	I0725 17:48:02.715917       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:48:02.716078       1 main.go:299] handling current node
	I0725 17:48:02.716218       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:48:02.716354       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:48:02.716553       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0725 17:48:02.716710       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:48:02.716982       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:48:02.717071       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	I0725 17:48:12.708500       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0725 17:48:12.708695       1 main.go:299] handling current node
	I0725 17:48:12.709027       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0725 17:48:12.709144       1 main.go:322] Node ha-485000-m02 has CIDR [10.244.1.0/24] 
	I0725 17:48:12.709384       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0725 17:48:12.709480       1 main.go:322] Node ha-485000-m03 has CIDR [10.244.2.0/24] 
	I0725 17:48:12.709756       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0725 17:48:12.709925       1 main.go:322] Node ha-485000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [bb09ac23fb5c] <==
	I0725 17:49:12.630251       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0725 17:49:12.630340       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0725 17:49:12.630392       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0725 17:49:12.632677       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 17:49:12.632839       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 17:49:12.671271       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0725 17:49:12.671437       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0725 17:49:12.719211       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0725 17:49:12.719996       1 policy_source.go:224] refreshing policies
	I0725 17:49:12.723042       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 17:49:12.727001       1 shared_informer.go:320] Caches are synced for configmaps
	I0725 17:49:12.739521       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0725 17:49:12.745300       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 17:49:12.754593       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 17:49:12.755092       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0725 17:49:12.755122       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0725 17:49:12.755236       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0725 17:49:12.769857       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0725 17:49:12.771521       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0725 17:49:12.771589       1 aggregator.go:165] initial CRD sync complete...
	I0725 17:49:12.771606       1 autoregister_controller.go:141] Starting autoregister controller
	I0725 17:49:12.771672       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0725 17:49:12.771740       1 cache.go:39] Caches are synced for autoregister controller
	I0725 17:49:12.810193       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0725 17:49:13.634621       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	
	
	==> kube-apiserver [d070da633e82] <==
	W0725 17:48:18.133515       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.133533       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.133551       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0725 17:48:18.135633       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00c549ea8)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0725 17:48:18.135736       1 trace.go:236] Trace[227145877]: "Get" accept:application/json, */*,audit-id:3ee97e8c-a90b-49a9-bd2d-1daff0338a90,client:192.169.0.5,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (25-Jul-2024 17:48:11.626) (total time: 6509ms):
	Trace[227145877]: [6.509602801s] [6.509602801s] END
	E0725 17:48:18.136018       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00c549f18)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0725 17:48:18.136121       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00c549f28)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0725 17:48:18.136277       1 timeout.go:142] post-timeout activity - time-elapsed: 109.945637ms, GET "/readyz" result: <nil>
	I0725 17:48:18.136698       1 trace.go:236] Trace[964619862]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.169.0.5,type:*v1.Endpoints,resource:apiServerIPInfo (25-Jul-2024 17:48:13.107) (total time: 5029ms):
	Trace[964619862]: [5.029134952s] [5.029134952s] END
	I0725 17:48:18.137088       1 trace.go:236] Trace[1328638299]: "Update" accept:application/json, */*,audit-id:845214e4-f445-4918-9288-423a1ea3f222,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (25-Jul-2024 17:48:13.071) (total time: 5065ms):
	Trace[1328638299]: ["GuaranteedUpdate etcd3" audit-id:845214e4-f445-4918-9288-423a1ea3f222,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 5065ms (17:48:13.072)
	Trace[1328638299]:  ---"Txn call failed" err:rpc error: code = Unknown desc = malformed header: missing HTTP content-type 5063ms (17:48:18.136)]
	Trace[1328638299]: [5.065234103s] [5.065234103s] END
	W0725 17:48:18.137275       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.137331       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0725 17:48:18.139661       1 controller.go:159] unable to sync kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0725 17:48:18.140698       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.140768       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.140810       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.140838       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 17:48:18.140862       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0725 17:48:18.140922       1 controller.go:195] "Failed to update lease" err="rpc error: code = Unknown desc = malformed header: missing HTTP content-type"
	I0725 17:48:18.190627       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [00f3c5f6f1b5] <==
	I0725 17:48:52.193509       1 serving.go:380] Generated self-signed cert in-memory
	I0725 17:48:52.581794       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0725 17:48:52.581857       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:48:52.584559       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0725 17:48:52.585376       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 17:48:52.585632       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 17:48:52.585875       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0725 17:49:12.699450       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [b2f637b4a6f7] <==
	I0725 17:50:02.506924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.822µs"
	I0725 17:50:04.767363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.571098ms"
	I0725 17:50:04.767662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="257.416µs"
	I0725 17:50:11.453829       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-fh5q9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-fh5q9\": the object has been modified; please apply your changes to the latest version and try again"
	I0725 17:50:11.456664       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bfa34ec2-d695-424c-89b9-e3c5a5edb82f", APIVersion:"v1", ResourceVersion:"246", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-fh5q9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-fh5q9": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:50:11.463104       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.202966ms"
	E0725 17:50:11.463156       1 replica_set.go:557] sync "kube-system/coredns-7db6d8ff4d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-7db6d8ff4d": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:50:11.463494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="313.676µs"
	I0725 17:50:11.468346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.454µs"
	I0725 17:51:45.850600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.96054ms"
	I0725 17:51:45.899991       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.357961ms"
	E0725 17:51:45.900032       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:51:45.931566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.507927ms"
	E0725 17:51:45.931659       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:51:45.931778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.538µs"
	I0725 17:51:45.942852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.581µs"
	I0725 17:51:47.951056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.623µs"
	I0725 17:51:48.517649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.926µs"
	I0725 17:51:48.523119       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.996µs"
	I0725 17:51:48.525633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.05µs"
	E0725 17:51:55.040503       1 gc_controller.go:153] "Failed to get node" err="node \"ha-485000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-485000-m03"
	E0725 17:51:55.040579       1 gc_controller.go:153] "Failed to get node" err="node \"ha-485000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-485000-m03"
	E0725 17:51:55.040597       1 gc_controller.go:153] "Failed to get node" err="node \"ha-485000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-485000-m03"
	E0725 17:51:55.040611       1 gc_controller.go:153] "Failed to get node" err="node \"ha-485000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-485000-m03"
	E0725 17:51:55.040622       1 gc_controller.go:153] "Failed to get node" err="node \"ha-485000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-485000-m03"
	
	
	==> kube-proxy [a9a19bab6ef8] <==
	I0725 17:49:32.621289       1 server_linux.go:69] "Using iptables proxy"
	I0725 17:49:32.645667       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0725 17:49:32.691573       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:49:32.691749       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:49:32.691811       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:49:32.695182       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:49:32.696010       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:49:32.696165       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:49:32.700031       1 config.go:192] "Starting service config controller"
	I0725 17:49:32.700126       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:49:32.700303       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:49:32.700360       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:49:32.702446       1 config.go:319] "Starting node config controller"
	I0725 17:49:32.702527       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:49:32.801433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 17:49:32.801644       1 shared_informer.go:320] Caches are synced for service config
	I0725 17:49:32.803416       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f34917d25cfb] <==
	I0725 17:43:28.583459       1 server_linux.go:69] "Using iptables proxy"
	I0725 17:43:28.589732       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0725 17:43:28.631529       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:43:28.631610       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:43:28.631661       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:43:28.635368       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:43:28.635648       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:43:28.635902       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:43:28.637590       1 config.go:192] "Starting service config controller"
	I0725 17:43:28.637644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:43:28.637672       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:43:28.637684       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:43:28.638240       1 config.go:319] "Starting node config controller"
	I0725 17:43:28.639127       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:43:28.738536       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 17:43:28.738551       1 shared_informer.go:320] Caches are synced for service config
	I0725 17:43:28.739452       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5133be0bb8f0] <==
	I0725 17:48:51.828281       1 serving.go:380] Generated self-signed cert in-memory
	W0725 17:49:02.461664       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0725 17:49:02.461710       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 17:49:02.461716       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 17:49:12.469101       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 17:49:12.469301       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:49:12.471867       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 17:49:12.471972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 17:49:12.473141       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 17:49:12.471987       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 17:49:12.773323       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [616226aa67d0] <==
	E0725 17:46:05.510251       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-fmpmr\": pod busybox-fc5497c4f-fmpmr is already assigned to node \"ha-485000-m02\"" pod="default/busybox-fc5497c4f-fmpmr"
	I0725 17:46:05.510297       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-fmpmr" node="ha-485000-m02"
	E0725 17:46:32.247841       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-d4ljg\": pod kindnet-d4ljg is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-d4ljg" node="ha-485000-m04"
	E0725 17:46:32.247918       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bc0c4eed-b4e4-414b-95cd-1808497c1030(kube-system/kindnet-d4ljg) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-d4ljg"
	E0725 17:46:32.247933       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-d4ljg\": pod kindnet-d4ljg is already assigned to node \"ha-485000-m04\"" pod="kube-system/kindnet-d4ljg"
	I0725 17:46:32.247946       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-d4ljg" node="ha-485000-m04"
	E0725 17:46:32.260257       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mvbkh\": pod kube-proxy-mvbkh is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mvbkh" node="ha-485000-m04"
	E0725 17:46:32.260311       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 30908c7f-171c-4d89-8f93-69e304ca0ef0(kube-system/kube-proxy-mvbkh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-mvbkh"
	E0725 17:46:32.260323       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mvbkh\": pod kube-proxy-mvbkh is already assigned to node \"ha-485000-m04\"" pod="kube-system/kube-proxy-mvbkh"
	E0725 17:46:32.260438       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pzpdg\": pod kube-proxy-pzpdg is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pzpdg" node="ha-485000-m04"
	E0725 17:46:32.260530       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pzpdg\": pod kube-proxy-pzpdg is already assigned to node \"ha-485000-m04\"" pod="kube-system/kube-proxy-pzpdg"
	E0725 17:46:32.261298       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-h45p5\": pod kindnet-h45p5 is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-h45p5" node="ha-485000-m04"
	E0725 17:46:32.261340       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dc5552bb-a604-499f-b14a-fa1a01d2c56b(kube-system/kindnet-h45p5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-h45p5"
	E0725 17:46:32.261350       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-h45p5\": pod kindnet-h45p5 is already assigned to node \"ha-485000-m04\"" pod="kube-system/kindnet-h45p5"
	I0725 17:46:32.261359       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-h45p5" node="ha-485000-m04"
	I0725 17:46:32.261423       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mvbkh" node="ha-485000-m04"
	E0725 17:46:32.293379       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pndm8\": pod kube-proxy-pndm8 is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pndm8" node="ha-485000-m04"
	E0725 17:46:32.293429       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bf54e0c2-73ed-433e-abf5-a9b0d7effd3a(kube-system/kube-proxy-pndm8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-pndm8"
	E0725 17:46:32.293442       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pndm8\": pod kube-proxy-pndm8 is already assigned to node \"ha-485000-m04\"" pod="kube-system/kube-proxy-pndm8"
	I0725 17:46:32.293453       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-pndm8" node="ha-485000-m04"
	E0725 17:46:32.295248       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cq6bp\": pod kindnet-cq6bp is already assigned to node \"ha-485000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-cq6bp" node="ha-485000-m04"
	E0725 17:46:32.295438       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f2a36380-2980-4faf-b814-d1ec1200ce84(kube-system/kindnet-cq6bp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-cq6bp"
	E0725 17:46:32.295540       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cq6bp\": pod kindnet-cq6bp is already assigned to node \"ha-485000-m04\"" pod="kube-system/kindnet-cq6bp"
	I0725 17:46:32.296219       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cq6bp" node="ha-485000-m04"
	E0725 17:48:18.187370       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 25 17:49:32 ha-485000 kubelet[1571]: I0725 17:49:32.157513    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5370b1e66d0a1bab51d0ae170c1d6f725e46c5b56e817bf301e0de380ab2f12"
	Jul 25 17:49:32 ha-485000 kubelet[1571]: I0725 17:49:32.168285    1571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe4b8e6e60c09771c510e3819e54d688f98d8acf64f71de6c8e6f426c4debe86"
	Jul 25 17:49:32 ha-485000 kubelet[1571]: I0725 17:49:32.168399    1571 scope.go:117] "RemoveContainer" containerID="f34917d25cfb4ba024d1a1cf5effbc6f97bcf2e268abc6bf2361f8c9c5eeb010"
	Jul 25 17:49:44 ha-485000 kubelet[1571]: I0725 17:49:44.115564    1571 scope.go:117] "RemoveContainer" containerID="2fcad10513df422ac171e54526b00f2c84f55db282008fc856542ab865eb1c73"
	Jul 25 17:49:44 ha-485000 kubelet[1571]: E0725 17:49:44.128206    1571 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:49:44 ha-485000 kubelet[1571]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:49:44 ha-485000 kubelet[1571]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:49:44 ha-485000 kubelet[1571]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:49:44 ha-485000 kubelet[1571]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:50:02 ha-485000 kubelet[1571]: I0725 17:50:02.559839    1571 scope.go:117] "RemoveContainer" containerID="f17c37c2a75729e7739bb6f9c964dcacb64cebbab8024774ad9941e632f91c1a"
	Jul 25 17:50:02 ha-485000 kubelet[1571]: I0725 17:50:02.560088    1571 scope.go:117] "RemoveContainer" containerID="33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a"
	Jul 25 17:50:02 ha-485000 kubelet[1571]: E0725 17:50:02.560196    1571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9d25ef1d-567f-4da3-a62e-16958da26713)\"" pod="kube-system/storage-provisioner" podUID="9d25ef1d-567f-4da3-a62e-16958da26713"
	Jul 25 17:50:15 ha-485000 kubelet[1571]: I0725 17:50:15.087487    1571 scope.go:117] "RemoveContainer" containerID="33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a"
	Jul 25 17:50:15 ha-485000 kubelet[1571]: E0725 17:50:15.087667    1571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9d25ef1d-567f-4da3-a62e-16958da26713)\"" pod="kube-system/storage-provisioner" podUID="9d25ef1d-567f-4da3-a62e-16958da26713"
	Jul 25 17:50:27 ha-485000 kubelet[1571]: I0725 17:50:27.087222    1571 scope.go:117] "RemoveContainer" containerID="33197b74cd7a5d006a319a4bbf5187a61b851ef71753ac74b3c5104b318eb35a"
	Jul 25 17:50:44 ha-485000 kubelet[1571]: E0725 17:50:44.123421    1571 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:50:44 ha-485000 kubelet[1571]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:50:44 ha-485000 kubelet[1571]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:50:44 ha-485000 kubelet[1571]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:50:44 ha-485000 kubelet[1571]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:51:44 ha-485000 kubelet[1571]: E0725 17:51:44.123531    1571 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:51:44 ha-485000 kubelet[1571]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:51:44 ha-485000 kubelet[1571]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:51:44 ha-485000 kubelet[1571]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:51:44 ha-485000 kubelet[1571]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-485000 -n ha-485000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-485000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-b2j65
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-485000 describe pod busybox-fc5497c4f-b2j65
helpers_test.go:282: (dbg) kubectl --context ha-485000 describe pod busybox-fc5497c4f-b2j65:

-- stdout --
	Name:             busybox-fc5497c4f-b2j65
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xsn7h (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-xsn7h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  11s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  9s                 default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  10s (x2 over 12s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  10s (x2 over 12s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (12.06s)

TestMountStart/serial/StartWithMountFirst (136.76s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-348000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0725 11:01:24.591843    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-348000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.67745062s)

-- stdout --
	* [mount-start-1-348000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-348000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-348000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:bd:3c:82:48:1c
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-348000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:17:90:a4:9c:6e
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:17:90:a4:9c:6e
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-348000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-348000 -n mount-start-1-348000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-348000 -n mount-start-1-348000: exit status 7 (77.547764ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0725 11:01:59.430330    4579 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0725 11:01:59.430353    4579 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-348000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.76s)

TestScheduledStopUnix (141.87s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-406000 --memory=2048 --driver=hyperkit 
E0725 11:16:24.609954    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 11:17:20.201251    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-406000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.539056599s)

-- stdout --
	* [scheduled-stop-406000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-406000" primary control-plane node in "scheduled-stop-406000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-406000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:4:27:df:ca:80
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-406000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1a:1f:17:e9:b5:88
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1a:1f:17:e9:b5:88
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-406000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-406000" primary control-plane node in "scheduled-stop-406000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-406000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:4:27:df:ca:80
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-406000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1a:1f:17:e9:b5:88
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1a:1f:17:e9:b5:88
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-25 11:18:23.09449 -0700 PDT m=+3020.181832645
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-406000 -n scheduled-stop-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-406000 -n scheduled-stop-406000: exit status 7 (76.952364ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0725 11:18:23.169777    5778 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0725 11:18:23.169799    5778 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-406000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-406000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-406000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-406000: (5.24983145s)
--- FAIL: TestScheduledStopUnix (141.87s)

TestKubernetesUpgrade (734.6s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-071000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-071000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (51.126212711s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-071000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-071000: (8.381879077s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-071000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-071000 status --format={{.Host}}: exit status 7 (66.227783ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-071000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
E0725 11:36:07.720511    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 11:36:24.663192    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 11:37:20.256869    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 11:40:04.850869    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:41:24.671086    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 11:41:27.914232    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:42:20.263154    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 11:43:43.322831    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 11:45:04.858579    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:46:24.676426    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-071000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (11m9.581552957s)

-- stdout --
	* [kubernetes-upgrade-071000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "kubernetes-upgrade-071000" primary control-plane node in "kubernetes-upgrade-071000" cluster
	* Restarting existing hyperkit VM for "kubernetes-upgrade-071000" ...
	
	

-- /stdout --
** stderr ** 
	I0725 11:35:33.278168    7283 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:35:33.278437    7283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:35:33.278442    7283 out.go:304] Setting ErrFile to fd 2...
	I0725 11:35:33.278446    7283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:35:33.278627    7283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 11:35:33.280005    7283 out.go:298] Setting JSON to false
	I0725 11:35:33.302018    7283 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5703,"bootTime":1721926830,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 11:35:33.302113    7283 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:35:33.323779    7283 out.go:177] * [kubernetes-upgrade-071000] minikube v1.33.1 on Darwin 14.5
	I0725 11:35:33.365393    7283 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:35:33.365426    7283 notify.go:220] Checking for updates...
	I0725 11:35:33.407298    7283 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 11:35:33.428421    7283 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 11:35:33.449431    7283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:35:33.470240    7283 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 11:35:33.491372    7283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:35:33.512653    7283 config.go:182] Loaded profile config "kubernetes-upgrade-071000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0725 11:35:33.512992    7283 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:35:33.513045    7283 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:35:33.521824    7283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54137
	I0725 11:35:33.522166    7283 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:35:33.522602    7283 main.go:141] libmachine: Using API Version  1
	I0725 11:35:33.522619    7283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:35:33.522867    7283 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:35:33.522992    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .DriverName
	I0725 11:35:33.523189    7283 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:35:33.523439    7283 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:35:33.523462    7283 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:35:33.531769    7283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54139
	I0725 11:35:33.532118    7283 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:35:33.532456    7283 main.go:141] libmachine: Using API Version  1
	I0725 11:35:33.532467    7283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:35:33.532677    7283 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:35:33.532799    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .DriverName
	I0725 11:35:33.561429    7283 out.go:177] * Using the hyperkit driver based on existing profile
	I0725 11:35:33.603326    7283 start.go:297] selected driver: hyperkit
	I0725 11:35:33.603337    7283 start.go:901] validating driver "hyperkit" against &{Name:kubernetes-upgrade-071000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-071000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:35:33.603423    7283 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:35:33.606216    7283 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:35:33.606316    7283 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 11:35:33.614703    7283 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 11:35:33.618659    7283 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:35:33.618680    7283 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 11:35:33.618801    7283 cni.go:84] Creating CNI manager for ""
	I0725 11:35:33.618815    7283 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:35:33.618859    7283 start.go:340] cluster config:
	{Name:kubernetes-upgrade-071000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-071000
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.21 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:35:33.618943    7283 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:35:33.661231    7283 out.go:177] * Starting "kubernetes-upgrade-071000" primary control-plane node in "kubernetes-upgrade-071000" cluster
	I0725 11:35:33.682418    7283 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 11:35:33.682451    7283 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0725 11:35:33.682464    7283 cache.go:56] Caching tarball of preloaded images
	I0725 11:35:33.682563    7283 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 11:35:33.682571    7283 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0725 11:35:33.682666    7283 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubernetes-upgrade-071000/config.json ...
	I0725 11:35:33.683081    7283 start.go:360] acquireMachinesLock for kubernetes-upgrade-071000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:45:25.583446    7283 start.go:364] duration metric: took 9m51.888853075s to acquireMachinesLock for "kubernetes-upgrade-071000"
	I0725 11:45:25.583501    7283 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:45:25.583515    7283 fix.go:54] fixHost starting: 
	I0725 11:45:25.583797    7283 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:45:25.583816    7283 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:45:25.592647    7283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54309
	I0725 11:45:25.592988    7283 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:45:25.593318    7283 main.go:141] libmachine: Using API Version  1
	I0725 11:45:25.593335    7283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:45:25.593537    7283 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:45:25.593661    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .DriverName
	I0725 11:45:25.593764    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetState
	I0725 11:45:25.593857    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:45:25.593953    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | hyperkit pid from json: 7177
	I0725 11:45:25.594866    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | hyperkit pid 7177 missing from process table
	I0725 11:45:25.594909    7283 fix.go:112] recreateIfNeeded on kubernetes-upgrade-071000: state=Stopped err=<nil>
	I0725 11:45:25.594924    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .DriverName
	W0725 11:45:25.595018    7283 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:45:25.647610    7283 out.go:177] * Restarting existing hyperkit VM for "kubernetes-upgrade-071000" ...
	I0725 11:45:25.689682    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .Start
	I0725 11:45:25.689779    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:45:25.689859    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/hyperkit.pid
	I0725 11:45:25.690741    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | hyperkit pid 7177 missing from process table
	I0725 11:45:25.690754    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | pid 7177 is in state "Stopped"
	I0725 11:45:25.690767    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/hyperkit.pid...
	I0725 11:45:25.690956    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | Using UUID fce504db-5602-4b8e-864c-cee8c375dfdc
	I0725 11:45:25.721850    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | Generated MAC 92:a3:2a:d3:ff:2b
	I0725 11:45:25.721874    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kubernetes-upgrade-071000
	I0725 11:45:25.722057    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fce504db-5602-4b8e-864c-cee8c375dfdc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aab40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:45:25.722114    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fce504db-5602-4b8e-864c-cee8c375dfdc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aab40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 11:45:25.722184    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fce504db-5602-4b8e-864c-cee8c375dfdc", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/kubernetes-upgrade-071000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ku
bernetes-upgrade-071000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kubernetes-upgrade-071000"}
	I0725 11:45:25.722243    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fce504db-5602-4b8e-864c-cee8c375dfdc -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/kubernetes-upgrade-071000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/bzimage,/Users/jenkins/minikube-integr
ation/19326-1195/.minikube/machines/kubernetes-upgrade-071000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kubernetes-upgrade-071000"
	I0725 11:45:25.722263    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 11:45:25.724131    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 DEBUG: hyperkit: Pid is 7813
	I0725 11:45:25.724660    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | Attempt 0
	I0725 11:45:25.724686    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:45:25.724820    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | hyperkit pid from json: 7813
	I0725 11:45:25.727308    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | Searching for 92:a3:2a:d3:ff:2b in /var/db/dhcpd_leases ...
	I0725 11:45:25.727421    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I0725 11:45:25.727475    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:92:a3:2a:d3:ff:2b ID:1,92:a3:2a:d3:ff:2b Lease:0x66a3ec43}
	I0725 11:45:25.727544    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | Found match: 92:a3:2a:d3:ff:2b
	I0725 11:45:25.727564    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | IP: 192.169.0.21
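
Note: the lines above show the driver resolving the VM's IP by matching its MAC address (92:a3:2a:d3:ff:2b) against /var/db/dhcpd_leases and finding 192.169.0.21; in this run the lease was already present, so the first pass ("Attempt 0") succeeded. A minimal Go sketch of that lookup, assuming the usual ip_address=/hw_address= fields in the macOS lease file (an illustrative helper, not minikube's actual implementation):

// leaselookup.go - sketch of resolving a VM IP from /var/db/dhcpd_leases by MAC address.
// The field names below are assumptions about the lease-file format, not taken from the
// driver's own parser.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipForMAC(leasePath, mac string) (string, error) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// Remember the most recent IP; the matching hw_address follows it in the same entry.
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is recorded as "1,<mac>", so compare only the MAC suffix.
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease entry found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "92:a3:2a:d3:ff:2b")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // with the lease shown above this would print 192.169.0.21
}
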
	I0725 11:45:25.727712    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetConfigRaw
	I0725 11:45:25.728604    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetIP
	I0725 11:45:25.728806    7283 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubernetes-upgrade-071000/config.json ...
	I0725 11:45:25.729409    7283 machine.go:94] provisionDockerMachine start ...
	I0725 11:45:25.729430    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .DriverName
	I0725 11:45:25.729607    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:25.729724    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHPort
	I0725 11:45:25.729838    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:25.729946    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:25.730048    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHUsername
	I0725 11:45:25.730203    7283 main.go:141] libmachine: Using SSH client type: native
	I0725 11:45:25.730428    7283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa65d0c0] 0xa65fe20 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0725 11:45:25.730438    7283 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 11:45:25.733449    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 11:45:25.742801    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 11:45:25.744071    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:45:25.744117    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:45:25.744144    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:45:25.744156    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:45:26.133442    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 11:45:26.133455    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 11:45:26.248066    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 11:45:26.248088    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 11:45:26.248105    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 11:45:26.248122    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 11:45:26.248969    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 11:45:26.248982    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 11:45:31.829743    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 11:45:31.829864    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 11:45:31.829878    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 11:45:31.853910    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) DBG | 2024/07/25 11:45:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 11:45:35.632864    7283 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.21:22: connect: connection refused
	I0725 11:45:38.693745    7283 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 11:45:38.693760    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetMachineName
	I0725 11:45:38.693899    7283 buildroot.go:166] provisioning hostname "kubernetes-upgrade-071000"
	I0725 11:45:38.693911    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetMachineName
	I0725 11:45:38.694019    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:38.694111    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHPort
	I0725 11:45:38.694200    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:38.694290    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:38.694406    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHUsername
	I0725 11:45:38.694534    7283 main.go:141] libmachine: Using SSH client type: native
	I0725 11:45:38.694679    7283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa65d0c0] 0xa65fe20 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0725 11:45:38.694688    7283 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-071000 && echo "kubernetes-upgrade-071000" | sudo tee /etc/hostname
	I0725 11:45:38.764132    7283 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-071000
	
	I0725 11:45:38.764150    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:38.764291    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHPort
	I0725 11:45:38.764374    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:38.764490    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:38.764595    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHUsername
	I0725 11:45:38.764732    7283 main.go:141] libmachine: Using SSH client type: native
	I0725 11:45:38.764874    7283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa65d0c0] 0xa65fe20 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0725 11:45:38.764890    7283 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-071000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-071000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-071000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 11:45:38.831121    7283 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 11:45:38.831143    7283 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 11:45:38.831156    7283 buildroot.go:174] setting up certificates
	I0725 11:45:38.831165    7283 provision.go:84] configureAuth start
	I0725 11:45:38.831172    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetMachineName
	I0725 11:45:38.831346    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetIP
	I0725 11:45:38.831439    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:38.831520    7283 provision.go:143] copyHostCerts
	I0725 11:45:38.831604    7283 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 11:45:38.831614    7283 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 11:45:38.831907    7283 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 11:45:38.832158    7283 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 11:45:38.832165    7283 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 11:45:38.832255    7283 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 11:45:38.832438    7283 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 11:45:38.832444    7283 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 11:45:38.832534    7283 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 11:45:38.832694    7283 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-071000 san=[127.0.0.1 192.169.0.21 kubernetes-upgrade-071000 localhost minikube]
	I0725 11:45:38.874818    7283 provision.go:177] copyRemoteCerts
	I0725 11:45:38.874873    7283 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 11:45:38.874890    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:38.875012    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHPort
	I0725 11:45:38.875104    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:38.875204    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHUsername
	I0725 11:45:38.875295    7283 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/id_rsa Username:docker}
	I0725 11:45:38.912941    7283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 11:45:38.932304    7283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 11:45:38.951723    7283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0725 11:45:38.971142    7283 provision.go:87] duration metric: took 139.958323ms to configureAuth
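
Note: configureAuth above generated a server certificate with SANs for 127.0.0.1, 192.169.0.21 and the machine names, and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. The docker.service unit written a few steps below exposes the daemon on tcp://0.0.0.0:2376 with --tlsverify against exactly those files. A minimal Go sketch of a client using the host-side certs from this run to reach that endpoint (illustrative only; the paths and IP are copied from the log, and /_ping is the standard Docker Engine API health-check path):

// dockertls.go - sketch of a mutually-authenticated TLS client for the Docker API on 2376.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Host-side certificate paths taken from the copyHostCerts lines above.
	certDir := "/Users/jenkins/minikube-integration/19326-1195/.minikube/certs"

	caPEM, err := os.ReadFile(certDir + "/ca.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	clientCert, err := tls.LoadX509KeyPair(certDir+"/cert.pem", certDir+"/key.pem")
	if err != nil {
		panic(err)
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				RootCAs:      pool,                          // trust the minikube CA
				Certificates: []tls.Certificate{clientCert}, // present the client cert for --tlsverify
			},
		},
	}

	// 192.169.0.21 is one of the SANs listed in the provision.go line above.
	resp, err := client.Get("https://192.169.0.21:2376/_ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("docker API:", resp.Status)
}
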
	I0725 11:45:38.971156    7283 buildroot.go:189] setting minikube options for container-runtime
	I0725 11:45:38.971287    7283 config.go:182] Loaded profile config "kubernetes-upgrade-071000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0725 11:45:38.971304    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .DriverName
	I0725 11:45:38.971452    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:38.971551    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHPort
	I0725 11:45:38.971640    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:38.971743    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:38.971844    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHUsername
	I0725 11:45:38.971968    7283 main.go:141] libmachine: Using SSH client type: native
	I0725 11:45:38.972098    7283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa65d0c0] 0xa65fe20 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0725 11:45:38.972106    7283 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 11:45:39.031498    7283 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 11:45:39.031521    7283 buildroot.go:70] root file system type: tmpfs
	I0725 11:45:39.031614    7283 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 11:45:39.031630    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:39.031779    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHPort
	I0725 11:45:39.031872    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:39.031987    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:39.032080    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHUsername
	I0725 11:45:39.032210    7283 main.go:141] libmachine: Using SSH client type: native
	I0725 11:45:39.032353    7283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa65d0c0] 0xa65fe20 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0725 11:45:39.032395    7283 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 11:45:39.102987    7283 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 11:45:39.103009    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:39.103151    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHPort
	I0725 11:45:39.103253    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:39.103347    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:39.103436    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHUsername
	I0725 11:45:39.103558    7283 main.go:141] libmachine: Using SSH client type: native
	I0725 11:45:39.103701    7283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa65d0c0] 0xa65fe20 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0725 11:45:39.103713    7283 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 11:45:40.776014    7283 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 11:45:40.776028    7283 machine.go:97] duration metric: took 15.046314586s to provisionDockerMachine
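
Note: the SSH command a few lines above uses the "diff -u current new || { mv ...; systemctl daemon-reload/enable/restart; }" idiom, so the unit is only installed and docker restarted when the rendered file actually differs from what is on disk. Here diff failed because no docker.service existed yet, which is why the new unit was moved into place and the service enabled (the "Created symlink" message). A minimal Go sketch of the same install-if-changed idiom, assuming it runs as root inside the guest (illustrative, not minikube's code):

// installunit.go - sketch of "replace the systemd unit only when it changed, then restart".
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const unit = "/lib/systemd/system/docker.service"
	newUnit := unit + ".new"

	oldData, _ := os.ReadFile(unit) // a missing unit reads as empty, matching the failed diff above
	newData, err := os.ReadFile(newUnit)
	if err != nil {
		panic(err)
	}
	if bytes.Equal(oldData, newData) {
		fmt.Println("unit unchanged, nothing to restart")
		return
	}
	if err := os.Rename(newUnit, unit); err != nil {
		panic(err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if err := run("systemctl", args...); err != nil {
			panic(err)
		}
	}
}
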
	I0725 11:45:40.776041    7283 start.go:293] postStartSetup for "kubernetes-upgrade-071000" (driver="hyperkit")
	I0725 11:45:40.776053    7283 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 11:45:40.776063    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .DriverName
	I0725 11:45:40.776301    7283 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 11:45:40.776318    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:40.776419    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHPort
	I0725 11:45:40.776521    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:40.776624    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHUsername
	I0725 11:45:40.776726    7283 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/id_rsa Username:docker}
	I0725 11:45:40.817591    7283 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 11:45:40.821097    7283 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 11:45:40.821112    7283 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 11:45:40.821228    7283 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 11:45:40.821444    7283 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 11:45:40.821658    7283 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 11:45:40.830903    7283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 11:45:40.864104    7283 start.go:296] duration metric: took 88.051817ms for postStartSetup
	I0725 11:45:40.864129    7283 fix.go:56] duration metric: took 15.280322945s for fixHost
	I0725 11:45:40.864142    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:40.864272    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHPort
	I0725 11:45:40.864366    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:40.864453    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:40.864529    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHUsername
	I0725 11:45:40.864645    7283 main.go:141] libmachine: Using SSH client type: native
	I0725 11:45:40.864785    7283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa65d0c0] 0xa65fe20 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0725 11:45:40.864793    7283 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 11:45:40.923716    7283 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933140.881924930
	
	I0725 11:45:40.923727    7283 fix.go:216] guest clock: 1721933140.881924930
	I0725 11:45:40.923737    7283 fix.go:229] Guest: 2024-07-25 11:45:40.88192493 -0700 PDT Remote: 2024-07-25 11:45:40.864132 -0700 PDT m=+607.609246195 (delta=17.79293ms)
	I0725 11:45:40.923760    7283 fix.go:200] guest clock delta is within tolerance: 17.79293ms
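
Note: the fix.go lines above run "date +%s.%N" on the guest, compare the result with the host clock at the same moment, and accept the drift because 17.79293ms is inside the tolerance. A minimal Go sketch of that comparison using the two timestamps from this run (the 2s tolerance here is an assumption for illustration, not necessarily minikube's threshold):

// clockdelta.go - sketch of the guest/host clock-drift check.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1721933140.881924930" // guest output of "date +%s.%N", from the log above
	guest, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	host := 1721933140.864132 // host wall clock at the same instant, also from the log

	delta := time.Duration(math.Abs(guest-host) * float64(time.Second))
	const tolerance = 2 * time.Second // assumed value for illustration only
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
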
	I0725 11:45:40.923770    7283 start.go:83] releasing machines lock for "kubernetes-upgrade-071000", held for 15.339997477s
	I0725 11:45:40.923789    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .DriverName
	I0725 11:45:40.923949    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetIP
	I0725 11:45:40.924041    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .DriverName
	I0725 11:45:40.924403    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .DriverName
	I0725 11:45:40.924523    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .DriverName
	I0725 11:45:40.924594    7283 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 11:45:40.924627    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:40.924667    7283 ssh_runner.go:195] Run: cat /version.json
	I0725 11:45:40.924684    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHHostname
	I0725 11:45:40.924726    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHPort
	I0725 11:45:40.924811    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:40.924820    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHPort
	I0725 11:45:40.924956    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHUsername
	I0725 11:45:40.924968    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHKeyPath
	I0725 11:45:40.925081    7283 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/id_rsa Username:docker}
	I0725 11:45:40.925096    7283 main.go:141] libmachine: (kubernetes-upgrade-071000) Calling .GetSSHUsername
	I0725 11:45:40.925186    7283 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/kubernetes-upgrade-071000/id_rsa Username:docker}
	I0725 11:45:40.957100    7283 ssh_runner.go:195] Run: systemctl --version
	I0725 11:45:41.008368    7283 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 11:45:41.012776    7283 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 11:45:41.012831    7283 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0725 11:45:41.020775    7283 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0725 11:45:41.033875    7283 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 11:45:41.033888    7283 start.go:495] detecting cgroup driver to use...
	I0725 11:45:41.033994    7283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 11:45:41.058585    7283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0725 11:45:41.067648    7283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 11:45:41.076738    7283 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 11:45:41.076788    7283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 11:45:41.085657    7283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 11:45:41.094392    7283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 11:45:41.103201    7283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 11:45:41.112076    7283 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 11:45:41.121107    7283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 11:45:41.130009    7283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 11:45:41.138735    7283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 11:45:41.147477    7283 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 11:45:41.155374    7283 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 11:45:41.163464    7283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:45:41.261919    7283 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0725 11:45:41.276832    7283 start.go:495] detecting cgroup driver to use...
	I0725 11:45:41.276908    7283 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 11:45:41.288152    7283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 11:45:41.299003    7283 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 11:45:41.328889    7283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 11:45:41.340364    7283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 11:45:41.350648    7283 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 11:45:41.372732    7283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 11:45:41.383217    7283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 11:45:41.400062    7283 ssh_runner.go:195] Run: which cri-dockerd
	I0725 11:45:41.403078    7283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 11:45:41.410179    7283 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0725 11:45:41.423613    7283 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 11:45:41.515055    7283 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 11:45:41.633510    7283 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 11:45:41.633578    7283 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 11:45:41.647747    7283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:45:41.736215    7283 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 11:46:42.657429    7283 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.920015405s)
	I0725 11:46:42.657509    7283 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0725 11:46:42.693470    7283 out.go:177] 
	W0725 11:46:42.714268    7283 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 25 18:45:39 kubernetes-upgrade-071000 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:39.339019551Z" level=info msg="Starting up"
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:39.339462323Z" level=info msg="containerd not running, starting managed containerd"
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:39.340019426Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=507
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.358936735Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.373973415Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374094338Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374165265Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374202388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374257710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374297352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374428189Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374468126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374499143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374528130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374583590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374697959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376275641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376332261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376464583Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376507190Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376554173Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376594356Z" level=info msg="metadata content store policy set" policy=shared
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377599762Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377691700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377734132Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377831359Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377877240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377942319Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378186253Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378299183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378341130Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378371869Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378403083Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378433527Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378470709Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378507989Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378545940Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378579851Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378609947Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378639597Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378675348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378717513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378751528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378782131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378815382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378851855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378884777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378914550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378974378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379029338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379063049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379093796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379123495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379154886Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379189736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379221222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379251039Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379318248Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379360837Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379571503Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379612087Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379641904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379671220Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379702252Z" level=info msg="NRI interface is disabled by configuration."
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379911063Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.380039112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.380099200Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.380143655Z" level=info msg="containerd successfully booted in 0.021829s"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.365855600Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.378809875Z" level=info msg="Loading containers: start."
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.496225042Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.557766153Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.602687324Z" level=warning msg="error locating sandbox id b26ccd2477619150495cce496dd17d01bfd4b87331266501a260397b8ef6ca40: sandbox b26ccd2477619150495cce496dd17d01bfd4b87331266501a260397b8ef6ca40 not found"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.602835463Z" level=info msg="Loading containers: done."
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.637221815Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.637298306Z" level=info msg="Daemon has completed initialization"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.729055211Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 18:45:40 kubernetes-upgrade-071000 systemd[1]: Started Docker Application Container Engine.
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.729120001Z" level=info msg="API listen on [::]:2376"
	Jul 25 18:45:41 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:41.721781361Z" level=info msg="Processing signal 'terminated'"
	Jul 25 18:45:41 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:41.722833932Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 18:45:41 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:41.722972331Z" level=info msg="Daemon shutdown complete"
	Jul 25 18:45:41 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:41.723016950Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 18:45:41 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:41.723029436Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 25 18:45:41 kubernetes-upgrade-071000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 18:45:42 kubernetes-upgrade-071000 systemd[1]: docker.service: Deactivated successfully.
	Jul 25 18:45:42 kubernetes-upgrade-071000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 18:45:42 kubernetes-upgrade-071000 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 18:45:42 kubernetes-upgrade-071000 dockerd[1009]: time="2024-07-25T18:45:42.765289218Z" level=info msg="Starting up"
	Jul 25 18:46:42 kubernetes-upgrade-071000 dockerd[1009]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 25 18:46:42 kubernetes-upgrade-071000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 25 18:46:42 kubernetes-upgrade-071000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 25 18:46:42 kubernetes-upgrade-071000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
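
Note: the journalctl capture above contains the actual failure. The first dockerd (pid 500) came up cleanly with its managed containerd; minikube then reconfigured the runtime (stopping the standalone containerd service and rewriting /etc/crictl.yaml and /etc/docker/daemon.json) and restarted docker, and the restarted dockerd (pid 1009) gave up after 60 seconds because it could not dial /run/containerd/containerd.sock. That is why "sudo systemctl restart docker" ran for 1m0.92s and the start exited with RUNTIME_ENABLE. A minimal Go sketch of a probe for that socket, assuming it is run inside the guest (a hypothetical diagnostic, not part of minikube):

// containerdprobe.go - sketch: check whether the system containerd socket accepts connections.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Same socket path dockerd reported it could not dial within its deadline.
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "containerd socket not reachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("containerd socket accepted a connection")
}
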
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 25 18:45:39 kubernetes-upgrade-071000 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:39.339019551Z" level=info msg="Starting up"
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:39.339462323Z" level=info msg="containerd not running, starting managed containerd"
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:39.340019426Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=507
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.358936735Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.373973415Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374094338Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374165265Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374202388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374257710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374297352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374428189Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374468126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374499143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374528130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374583590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.374697959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376275641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376332261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376464583Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376507190Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376554173Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.376594356Z" level=info msg="metadata content store policy set" policy=shared
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377599762Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377691700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377734132Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377831359Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377877240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.377942319Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378186253Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378299183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378341130Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378371869Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378403083Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378433527Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378470709Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378507989Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378545940Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378579851Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378609947Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378639597Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378675348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378717513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378751528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378782131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378815382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378851855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378884777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378914550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.378974378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379029338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379063049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379093796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379123495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379154886Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379189736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379221222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379251039Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379318248Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379360837Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379571503Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379612087Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379641904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379671220Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379702252Z" level=info msg="NRI interface is disabled by configuration."
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.379911063Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.380039112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.380099200Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 25 18:45:39 kubernetes-upgrade-071000 dockerd[507]: time="2024-07-25T18:45:39.380143655Z" level=info msg="containerd successfully booted in 0.021829s"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.365855600Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.378809875Z" level=info msg="Loading containers: start."
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.496225042Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.557766153Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.602687324Z" level=warning msg="error locating sandbox id b26ccd2477619150495cce496dd17d01bfd4b87331266501a260397b8ef6ca40: sandbox b26ccd2477619150495cce496dd17d01bfd4b87331266501a260397b8ef6ca40 not found"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.602835463Z" level=info msg="Loading containers: done."
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.637221815Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.637298306Z" level=info msg="Daemon has completed initialization"
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.729055211Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 18:45:40 kubernetes-upgrade-071000 systemd[1]: Started Docker Application Container Engine.
	Jul 25 18:45:40 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:40.729120001Z" level=info msg="API listen on [::]:2376"
	Jul 25 18:45:41 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:41.721781361Z" level=info msg="Processing signal 'terminated'"
	Jul 25 18:45:41 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:41.722833932Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 18:45:41 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:41.722972331Z" level=info msg="Daemon shutdown complete"
	Jul 25 18:45:41 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:41.723016950Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 18:45:41 kubernetes-upgrade-071000 dockerd[500]: time="2024-07-25T18:45:41.723029436Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 25 18:45:41 kubernetes-upgrade-071000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 18:45:42 kubernetes-upgrade-071000 systemd[1]: docker.service: Deactivated successfully.
	Jul 25 18:45:42 kubernetes-upgrade-071000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 18:45:42 kubernetes-upgrade-071000 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 18:45:42 kubernetes-upgrade-071000 dockerd[1009]: time="2024-07-25T18:45:42.765289218Z" level=info msg="Starting up"
	Jul 25 18:46:42 kubernetes-upgrade-071000 dockerd[1009]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 25 18:46:42 kubernetes-upgrade-071000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 25 18:46:42 kubernetes-upgrade-071000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 25 18:46:42 kubernetes-upgrade-071000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0725 11:46:42.714348    7283 out.go:239] * 
	* 
	W0725 11:46:42.715151    7283 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:46:42.798297    7283 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-amd64 start -p kubernetes-upgrade-071000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit  : exit status 90
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-071000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-071000 version --output=json: exit status 1 (38.998551ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-071000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-25 11:46:42.871293 -0700 PDT m=+4719.893008406
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-071000 -n kubernetes-upgrade-071000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-071000 -n kubernetes-upgrade-071000: exit status 6 (146.973738ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 11:46:43.005783    7867 status.go:417] kubeconfig endpoint: get endpoint: "kubernetes-upgrade-071000" does not appear in /Users/jenkins/minikube-integration/19326-1195/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-071000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-071000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-071000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-071000: (5.250183589s)
--- FAIL: TestKubernetesUpgrade (734.60s)

                                                
                                    
TestPause/serial/Start (146.15s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-811000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-811000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m26.061855866s)

                                                
                                                
-- stdout --
	* [pause-811000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-811000" primary control-plane node in "pause-811000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-811000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 46:be:46:43:ab:5f
	* Failed to start hyperkit VM. Running "minikube delete -p pause-811000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:a6:15:75:93:a3
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:a6:15:75:93:a3
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-811000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-811000 -n pause-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-811000 -n pause-811000: exit status 7 (86.196414ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 11:58:52.513960    8384 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0725 11:58:52.513982    8384 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-811000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (146.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (76.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
E0725 12:02:20.180218    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p enable-default-cni-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : exit status 90 (1m16.920067842s)

                                                
                                                
-- stdout --
	* [enable-default-cni-691000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "enable-default-cni-691000" primary control-plane node in "enable-default-cni-691000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 12:02:16.684953    9422 out.go:291] Setting OutFile to fd 1 ...
	I0725 12:02:16.685617    9422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 12:02:16.685626    9422 out.go:304] Setting ErrFile to fd 2...
	I0725 12:02:16.685632    9422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 12:02:16.686262    9422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 12:02:16.687879    9422 out.go:298] Setting JSON to false
	I0725 12:02:16.713524    9422 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7306,"bootTime":1721926830,"procs":617,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 12:02:16.713634    9422 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 12:02:16.779049    9422 out.go:177] * [enable-default-cni-691000] minikube v1.33.1 on Darwin 14.5
	I0725 12:02:16.841088    9422 notify.go:220] Checking for updates...
	I0725 12:02:16.870762    9422 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 12:02:16.927772    9422 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 12:02:16.969790    9422 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 12:02:17.027836    9422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 12:02:17.086997    9422 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 12:02:17.145102    9422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 12:02:17.182957    9422 config.go:182] Loaded profile config "flannel-691000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 12:02:17.183131    9422 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 12:02:17.213214    9422 out.go:177] * Using the hyperkit driver based on user configuration
	I0725 12:02:17.254915    9422 start.go:297] selected driver: hyperkit
	I0725 12:02:17.254944    9422 start.go:901] validating driver "hyperkit" against <nil>
	I0725 12:02:17.254967    9422 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 12:02:17.259205    9422 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:02:17.259315    9422 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 12:02:17.267461    9422 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 12:02:17.271178    9422 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 12:02:17.271207    9422 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 12:02:17.271245    9422 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0725 12:02:17.271429    9422 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0725 12:02:17.271450    9422 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 12:02:17.271503    9422 cni.go:84] Creating CNI manager for "bridge"
	I0725 12:02:17.271513    9422 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 12:02:17.271581    9422 start.go:340] cluster config:
	{Name:enable-default-cni-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-691000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPa
th: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 12:02:17.271664    9422 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:02:17.314179    9422 out.go:177] * Starting "enable-default-cni-691000" primary control-plane node in "enable-default-cni-691000" cluster
	I0725 12:02:17.336142    9422 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 12:02:17.336221    9422 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0725 12:02:17.336248    9422 cache.go:56] Caching tarball of preloaded images
	I0725 12:02:17.336455    9422 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 12:02:17.336474    9422 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 12:02:17.336629    9422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/enable-default-cni-691000/config.json ...
	I0725 12:02:17.336665    9422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/enable-default-cni-691000/config.json: {Name:mk638fae1753038db2827c0eb66cf93ddde05788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:02:17.337322    9422 start.go:360] acquireMachinesLock for enable-default-cni-691000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 12:02:17.337437    9422 start.go:364] duration metric: took 88.058µs to acquireMachinesLock for "enable-default-cni-691000"
	I0725 12:02:17.337479    9422 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-691000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-691000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 12:02:17.337578    9422 start.go:125] createHost starting for "" (driver="hyperkit")
	I0725 12:02:17.359249    9422 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 12:02:17.359509    9422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 12:02:17.359587    9422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 12:02:17.369481    9422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55813
	I0725 12:02:17.369893    9422 main.go:141] libmachine: () Calling .GetVersion
	I0725 12:02:17.370303    9422 main.go:141] libmachine: Using API Version  1
	I0725 12:02:17.370311    9422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 12:02:17.370574    9422 main.go:141] libmachine: () Calling .GetMachineName
	I0725 12:02:17.370711    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetMachineName
	I0725 12:02:17.370820    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .DriverName
	I0725 12:02:17.370928    9422 start.go:159] libmachine.API.Create for "enable-default-cni-691000" (driver="hyperkit")
	I0725 12:02:17.370952    9422 client.go:168] LocalClient.Create starting
	I0725 12:02:17.370989    9422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem
	I0725 12:02:17.371040    9422 main.go:141] libmachine: Decoding PEM data...
	I0725 12:02:17.371057    9422 main.go:141] libmachine: Parsing certificate...
	I0725 12:02:17.371116    9422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem
	I0725 12:02:17.371154    9422 main.go:141] libmachine: Decoding PEM data...
	I0725 12:02:17.371166    9422 main.go:141] libmachine: Parsing certificate...
	I0725 12:02:17.371181    9422 main.go:141] libmachine: Running pre-create checks...
	I0725 12:02:17.371191    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .PreCreateCheck
	I0725 12:02:17.371269    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:02:17.371421    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetConfigRaw
	I0725 12:02:17.371870    9422 main.go:141] libmachine: Creating machine...
	I0725 12:02:17.371878    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .Create
	I0725 12:02:17.371940    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:02:17.372061    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | I0725 12:02:17.371936    9430 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 12:02:17.372113    9422 main.go:141] libmachine: (enable-default-cni-691000) Downloading /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1195/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 12:02:17.554333    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | I0725 12:02:17.554270    9430 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/id_rsa...
	I0725 12:02:17.668928    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | I0725 12:02:17.668855    9430 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/enable-default-cni-691000.rawdisk...
	I0725 12:02:17.668938    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Writing magic tar header
	I0725 12:02:17.668972    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Writing SSH key tar header
	I0725 12:02:17.670122    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | I0725 12:02:17.669942    9430 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000 ...
	I0725 12:02:18.055830    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:02:18.055847    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/hyperkit.pid
	I0725 12:02:18.055860    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Using UUID 0c00633a-645c-405b-9bad-c7e25883ab5e
	I0725 12:02:18.081009    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Generated MAC e6:48:a:6f:4e:29
	I0725 12:02:18.081025    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=enable-default-cni-691000
	I0725 12:02:18.081060    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c00633a-645c-405b-9bad-c7e25883ab5e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 12:02:18.081091    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c00633a-645c-405b-9bad-c7e25883ab5e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 12:02:18.081138    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0c00633a-645c-405b-9bad-c7e25883ab5e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/enable-default-cni-691000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/en
able-default-cni-691000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=enable-default-cni-691000"}
	I0725 12:02:18.081182    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0c00633a-645c-405b-9bad-c7e25883ab5e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/enable-default-cni-691000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/bzimage,/Users/jenkins/minikube-integr
ation/19326-1195/.minikube/machines/enable-default-cni-691000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=enable-default-cni-691000"
	I0725 12:02:18.081190    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 12:02:18.084198    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 DEBUG: hyperkit: Pid is 9432
	I0725 12:02:18.085377    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Attempt 0
	I0725 12:02:18.085397    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:02:18.085441    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | hyperkit pid from json: 9432
	I0725 12:02:18.086651    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Searching for e6:48:a:6f:4e:29 in /var/db/dhcpd_leases ...
	I0725 12:02:18.086713    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0725 12:02:18.086737    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:4e:31:3f:c4:c6:a2 ID:1,4e:31:3f:c4:c6:a2 Lease:0x66a3f266}
	I0725 12:02:18.086753    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:d2:a4:da:88:7d:fc ID:1,d2:a4:da:88:7d:fc Lease:0x66a3f258}
	I0725 12:02:18.086763    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:d6:43:4a:45:96:f7 ID:1,d6:43:4a:45:96:f7 Lease:0x66a2a0cc}
	I0725 12:02:18.086789    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:5e:60:7f:18:5d:c0 ID:1,5e:60:7f:18:5d:c0 Lease:0x66a3f208}
	I0725 12:02:18.086815    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:8e:ac:48:b0:10:16 ID:1,8e:ac:48:b0:10:16 Lease:0x66a2a09d}
	I0725 12:02:18.086827    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:da:73:d8:bd:98:be ID:1,da:73:d8:bd:98:be Lease:0x66a3f1a6}
	I0725 12:02:18.086834    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:92:a3:2a:d3:ff:2b ID:1,92:a3:2a:d3:ff:2b Lease:0x66a3eece}
	I0725 12:02:18.086844    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:b6:78:b1:1d:c7 ID:1,52:b6:78:b1:1d:c7 Lease:0x66a3ec01}
	I0725 12:02:18.086851    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 12:02:18.086859    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 12:02:18.086884    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 12:02:18.086896    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 12:02:18.086914    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 12:02:18.086930    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 12:02:18.086949    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 12:02:18.086958    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 12:02:18.086965    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 12:02:18.086974    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 12:02:18.086980    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 12:02:18.086988    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 12:02:18.086995    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 12:02:18.087001    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 12:02:18.087008    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 12:02:18.087015    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 12:02:18.087024    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 12:02:18.087032    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 12:02:18.091798    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 12:02:18.101720    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 12:02:18.102483    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 12:02:18.102502    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 12:02:18.102514    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 12:02:18.102524    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 12:02:18.512124    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 12:02:18.512141    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 12:02:18.627621    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 12:02:18.627637    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 12:02:18.627644    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 12:02:18.627650    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 12:02:18.628223    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 12:02:18.628235    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 12:02:20.087973    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Attempt 1
	I0725 12:02:20.087990    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:02:20.088037    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | hyperkit pid from json: 9432
	I0725 12:02:20.089052    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Searching for e6:48:a:6f:4e:29 in /var/db/dhcpd_leases ...
	I0725 12:02:20.089114    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0725 12:02:20.089123    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:4e:31:3f:c4:c6:a2 ID:1,4e:31:3f:c4:c6:a2 Lease:0x66a3f266}
	I0725 12:02:20.089133    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:d2:a4:da:88:7d:fc ID:1,d2:a4:da:88:7d:fc Lease:0x66a3f258}
	I0725 12:02:20.089140    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:d6:43:4a:45:96:f7 ID:1,d6:43:4a:45:96:f7 Lease:0x66a2a0cc}
	I0725 12:02:20.089146    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:5e:60:7f:18:5d:c0 ID:1,5e:60:7f:18:5d:c0 Lease:0x66a3f208}
	I0725 12:02:20.089156    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:8e:ac:48:b0:10:16 ID:1,8e:ac:48:b0:10:16 Lease:0x66a2a09d}
	I0725 12:02:20.089164    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:da:73:d8:bd:98:be ID:1,da:73:d8:bd:98:be Lease:0x66a3f1a6}
	I0725 12:02:20.089170    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:92:a3:2a:d3:ff:2b ID:1,92:a3:2a:d3:ff:2b Lease:0x66a3eece}
	I0725 12:02:20.089186    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:b6:78:b1:1d:c7 ID:1,52:b6:78:b1:1d:c7 Lease:0x66a3ec01}
	I0725 12:02:20.089194    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 12:02:20.089201    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 12:02:20.089207    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 12:02:20.089226    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 12:02:20.089239    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 12:02:20.089249    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 12:02:20.089257    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 12:02:20.089273    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 12:02:20.089281    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 12:02:20.089288    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 12:02:20.089294    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 12:02:20.089302    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 12:02:20.089309    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 12:02:20.089316    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 12:02:20.089324    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 12:02:20.089331    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 12:02:20.089341    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 12:02:20.089350    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 12:02:22.089906    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Attempt 2
	I0725 12:02:22.089920    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:02:22.090092    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | hyperkit pid from json: 9432
	I0725 12:02:22.090944    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Searching for e6:48:a:6f:4e:29 in /var/db/dhcpd_leases ...
	I0725 12:02:22.091007    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0725 12:02:22.091017    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:4e:31:3f:c4:c6:a2 ID:1,4e:31:3f:c4:c6:a2 Lease:0x66a3f266}
	I0725 12:02:22.091030    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:d2:a4:da:88:7d:fc ID:1,d2:a4:da:88:7d:fc Lease:0x66a3f258}
	I0725 12:02:22.091037    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:d6:43:4a:45:96:f7 ID:1,d6:43:4a:45:96:f7 Lease:0x66a2a0cc}
	I0725 12:02:22.091043    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:5e:60:7f:18:5d:c0 ID:1,5e:60:7f:18:5d:c0 Lease:0x66a3f208}
	I0725 12:02:22.091049    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:8e:ac:48:b0:10:16 ID:1,8e:ac:48:b0:10:16 Lease:0x66a2a09d}
	I0725 12:02:22.091055    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:da:73:d8:bd:98:be ID:1,da:73:d8:bd:98:be Lease:0x66a3f1a6}
	I0725 12:02:22.091066    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:92:a3:2a:d3:ff:2b ID:1,92:a3:2a:d3:ff:2b Lease:0x66a3eece}
	I0725 12:02:22.091121    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:b6:78:b1:1d:c7 ID:1,52:b6:78:b1:1d:c7 Lease:0x66a3ec01}
	I0725 12:02:22.091144    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 12:02:22.091153    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 12:02:22.091161    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 12:02:22.091171    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 12:02:22.091179    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 12:02:22.091186    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 12:02:22.091194    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 12:02:22.091200    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 12:02:22.091208    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 12:02:22.091216    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 12:02:22.091223    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 12:02:22.091230    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 12:02:22.091243    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 12:02:22.091251    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 12:02:22.091257    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 12:02:22.091271    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 12:02:22.091298    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 12:02:22.091308    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 12:02:24.091725    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Attempt 3
	I0725 12:02:24.091739    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:02:24.091807    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | hyperkit pid from json: 9432
	I0725 12:02:24.092643    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Searching for e6:48:a:6f:4e:29 in /var/db/dhcpd_leases ...
	I0725 12:02:24.092704    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0725 12:02:24.092713    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:4e:31:3f:c4:c6:a2 ID:1,4e:31:3f:c4:c6:a2 Lease:0x66a3f266}
	I0725 12:02:24.092724    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:d2:a4:da:88:7d:fc ID:1,d2:a4:da:88:7d:fc Lease:0x66a3f258}
	I0725 12:02:24.092730    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:d6:43:4a:45:96:f7 ID:1,d6:43:4a:45:96:f7 Lease:0x66a2a0cc}
	I0725 12:02:24.092737    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:5e:60:7f:18:5d:c0 ID:1,5e:60:7f:18:5d:c0 Lease:0x66a3f208}
	I0725 12:02:24.092743    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:8e:ac:48:b0:10:16 ID:1,8e:ac:48:b0:10:16 Lease:0x66a2a09d}
	I0725 12:02:24.092763    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:da:73:d8:bd:98:be ID:1,da:73:d8:bd:98:be Lease:0x66a3f1a6}
	I0725 12:02:24.092778    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:92:a3:2a:d3:ff:2b ID:1,92:a3:2a:d3:ff:2b Lease:0x66a3eece}
	I0725 12:02:24.092786    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:b6:78:b1:1d:c7 ID:1,52:b6:78:b1:1d:c7 Lease:0x66a3ec01}
	I0725 12:02:24.092794    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 12:02:24.092802    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 12:02:24.092811    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 12:02:24.092818    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 12:02:24.092826    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 12:02:24.092847    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 12:02:24.092859    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 12:02:24.092867    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 12:02:24.092879    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 12:02:24.092893    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 12:02:24.092904    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 12:02:24.092912    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 12:02:24.092922    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 12:02:24.092939    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 12:02:24.092953    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 12:02:24.092961    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 12:02:24.092968    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 12:02:24.092976    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 12:02:24.267920    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:24 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 12:02:24.267950    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:24 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 12:02:24.267957    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:24 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 12:02:24.292039    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | 2024/07/25 12:02:24 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 12:02:26.092882    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Attempt 4
	I0725 12:02:26.092899    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:02:26.093004    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | hyperkit pid from json: 9432
	I0725 12:02:26.093791    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Searching for e6:48:a:6f:4e:29 in /var/db/dhcpd_leases ...
	I0725 12:02:26.093853    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0725 12:02:26.093861    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:4e:31:3f:c4:c6:a2 ID:1,4e:31:3f:c4:c6:a2 Lease:0x66a3f266}
	I0725 12:02:26.093882    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:d2:a4:da:88:7d:fc ID:1,d2:a4:da:88:7d:fc Lease:0x66a3f258}
	I0725 12:02:26.093900    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:d6:43:4a:45:96:f7 ID:1,d6:43:4a:45:96:f7 Lease:0x66a2a0cc}
	I0725 12:02:26.093907    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:5e:60:7f:18:5d:c0 ID:1,5e:60:7f:18:5d:c0 Lease:0x66a3f208}
	I0725 12:02:26.093913    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:8e:ac:48:b0:10:16 ID:1,8e:ac:48:b0:10:16 Lease:0x66a2a09d}
	I0725 12:02:26.093928    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:da:73:d8:bd:98:be ID:1,da:73:d8:bd:98:be Lease:0x66a3f1a6}
	I0725 12:02:26.093952    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:92:a3:2a:d3:ff:2b ID:1,92:a3:2a:d3:ff:2b Lease:0x66a3eece}
	I0725 12:02:26.093959    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:b6:78:b1:1d:c7 ID:1,52:b6:78:b1:1d:c7 Lease:0x66a3ec01}
	I0725 12:02:26.093972    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:96:fa:ee:ec:8e:9 ID:1,96:fa:ee:ec:8e:9 Lease:0x66a3e882}
	I0725 12:02:26.093980    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:22:be:18:bf:a:bd ID:1,22:be:18:bf:a:bd Lease:0x66a3e7aa}
	I0725 12:02:26.093986    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:8e:2a:d7:f7:8c ID:1,5e:8e:2a:d7:f7:8c Lease:0x66a295b1}
	I0725 12:02:26.093993    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:66:df:1d:ad:7d:51 ID:1,66:df:1d:ad:7d:51 Lease:0x66a294c1}
	I0725 12:02:26.094008    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:9:2d:8a:7:32 ID:1,26:9:2d:8a:7:32 Lease:0x66a3e6e8}
	I0725 12:02:26.094016    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:55:6:28:38:f4 ID:1,4e:55:6:28:38:f4 Lease:0x66a3e65d}
	I0725 12:02:26.094022    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:ea:8e:25:17:f3:ee ID:1,ea:8e:25:17:f3:ee Lease:0x66a29288}
	I0725 12:02:26.094043    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:4e:99:be:b0:9f:77 ID:1,4e:99:be:b0:9f:77 Lease:0x66a3e3bf}
	I0725 12:02:26.094050    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:d3:86:5c:14:aa ID:1,a2:d3:86:5c:14:aa Lease:0x66a3e359}
	I0725 12:02:26.094056    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:3a:5a:91:86:90:28 ID:1,3a:5a:91:86:90:28 Lease:0x66a3e32b}
	I0725 12:02:26.094072    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:16:64:41:fc:f9:b6 ID:1,16:64:41:fc:f9:b6 Lease:0x66a3e2c5}
	I0725 12:02:26.094078    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ba:e9:ef:e5:fe:75 ID:1,ba:e9:ef:e5:fe:75 Lease:0x66a3e2ab}
	I0725 12:02:26.094086    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:df:a:a6:c4:51 ID:1,f2:df:a:a6:c4:51 Lease:0x66a290b7}
	I0725 12:02:26.094093    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c2:64:80:a8:d2:48 ID:1,c2:64:80:a8:d2:48 Lease:0x66a3e272}
	I0725 12:02:26.094100    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:76:82:a1:51:13 ID:1,52:76:82:a1:51:13 Lease:0x66a3e260}
	I0725 12:02:26.094108    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fa:f3:8f:28:48:b6 ID:1,fa:f3:8f:28:48:b6 Lease:0x66a3df23}
	I0725 12:02:26.094126    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:a6:7d:ca:8:70:29 ID:1,a6:7d:ca:8:70:29 Lease:0x66a3de5e}
	I0725 12:02:26.094137    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:96:32:b3:b:eb:cb ID:1,96:32:b3:b:eb:cb Lease:0x66a3dcda}
	I0725 12:02:28.094351    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Attempt 5
	I0725 12:02:28.094369    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:02:28.094474    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | hyperkit pid from json: 9432
	I0725 12:02:28.095261    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Searching for e6:48:a:6f:4e:29 in /var/db/dhcpd_leases ...
	I0725 12:02:28.095326    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Found 27 entries in /var/db/dhcpd_leases!
	I0725 12:02:28.095335    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:e6:48:a:6f:4e:29 ID:1,e6:48:a:6f:4e:29 Lease:0x66a3f2c3}
	I0725 12:02:28.095341    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | Found match: e6:48:a:6f:4e:29
	I0725 12:02:28.095346    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | IP: 192.169.0.28
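
For readers tracing the retry loop above: the driver keeps polling macOS bootpd's /var/db/dhcpd_leases until an entry's hardware address matches the MAC it generated for the VM, then takes that entry's IP. A minimal Go sketch of such a lookup follows; it is illustrative only -- the block layout of the lease file and the helper name are assumptions, not the hyperkit driver's actual code.

	// lookupLeaseIP scans a dhcpd_leases-style file for a block whose
	// hw_address matches the given MAC and returns its ip_address.
	// The key=value block layout is assumed from the fields echoed in the log.
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)
	
	func lookupLeaseIP(leaseFile, mac string) (string, error) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", err
		}
		defer f.Close()
	
		var ip, hw string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// stored as "1,<mac>"; keep only the MAC part
				hw = strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 {
					hw = hw[i+1:]
				}
			case line == "}":
				// end of one lease block: check the match, then reset
				if strings.EqualFold(hw, mac) {
					return ip, nil
				}
				ip, hw = "", ""
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}
	
	func main() {
		ip, err := lookupLeaseIP("/var/db/dhcpd_leases", "e6:48:a:6f:4e:29")
		if err != nil {
			fmt.Println("not found yet, retry:", err)
			return
		}
		fmt.Println("IP:", ip)
	}
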
	I0725 12:02:28.095414    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetConfigRaw
	I0725 12:02:28.096049    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .DriverName
	I0725 12:02:28.096164    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .DriverName
	I0725 12:02:28.096266    9422 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 12:02:28.096275    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetState
	I0725 12:02:28.096354    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:02:28.096419    9422 main.go:141] libmachine: (enable-default-cni-691000) DBG | hyperkit pid from json: 9432
	I0725 12:02:28.097211    9422 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 12:02:28.097222    9422 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 12:02:28.097230    9422 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 12:02:28.097237    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:28.097320    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:28.097408    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:28.097501    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:28.097587    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:28.097706    9422 main.go:141] libmachine: Using SSH client type: native
	I0725 12:02:28.097914    9422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f70c0] 0x102f9e20 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I0725 12:02:28.097921    9422 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 12:02:29.160545    9422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
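
The `exit 0` round-trip above is the "Waiting for SSH to be available" probe: keep dialing the guest on port 22 with the machine's private key until a trivial command succeeds. A self-contained sketch of that pattern using golang.org/x/crypto/ssh is below; the retry count, timeouts, and key path are assumptions for illustration, not libmachine's actual values.

	// waitForSSH dials the guest repeatedly and runs `exit 0` until it succeeds.
	package main
	
	import (
		"fmt"
		"os"
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	func waitForSSH(addr, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // freshly booted VM; not for production use
			Timeout:         5 * time.Second,
		}
		for attempt := 1; attempt <= 30; attempt++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					runErr := sess.Run("exit 0")
					sess.Close()
					client.Close()
					if runErr == nil {
						return nil // SSH is up
					}
					err = runErr
				} else {
					client.Close()
					err = serr
				}
			}
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh never became available at %s", addr)
	}
	
	func main() {
		// hypothetical key location; the log uses the profile's machines/<name>/id_rsa
		key := os.ExpandEnv("$HOME/.minikube/machines/enable-default-cni-691000/id_rsa")
		if err := waitForSSH("192.169.0.28:22", "docker", key); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
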
	I0725 12:02:29.160562    9422 main.go:141] libmachine: Detecting the provisioner...
	I0725 12:02:29.160572    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:29.160719    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:29.160821    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.160916    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.161025    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:29.161157    9422 main.go:141] libmachine: Using SSH client type: native
	I0725 12:02:29.161301    9422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f70c0] 0x102f9e20 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I0725 12:02:29.161309    9422 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 12:02:29.225217    9422 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 12:02:29.225271    9422 main.go:141] libmachine: found compatible host: buildroot
	I0725 12:02:29.225277    9422 main.go:141] libmachine: Provisioning with buildroot...
	I0725 12:02:29.225283    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetMachineName
	I0725 12:02:29.225419    9422 buildroot.go:166] provisioning hostname "enable-default-cni-691000"
	I0725 12:02:29.225431    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetMachineName
	I0725 12:02:29.225529    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:29.225617    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:29.225690    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.225812    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.225905    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:29.226027    9422 main.go:141] libmachine: Using SSH client type: native
	I0725 12:02:29.226167    9422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f70c0] 0x102f9e20 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I0725 12:02:29.226176    9422 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-691000 && echo "enable-default-cni-691000" | sudo tee /etc/hostname
	I0725 12:02:29.300597    9422 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-691000
	
	I0725 12:02:29.300617    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:29.300753    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:29.300856    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.300934    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.301029    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:29.301162    9422 main.go:141] libmachine: Using SSH client type: native
	I0725 12:02:29.301327    9422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f70c0] 0x102f9e20 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I0725 12:02:29.301340    9422 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-691000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-691000/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-691000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 12:02:29.371550    9422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 12:02:29.371572    9422 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 12:02:29.371583    9422 buildroot.go:174] setting up certificates
	I0725 12:02:29.371615    9422 provision.go:84] configureAuth start
	I0725 12:02:29.371625    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetMachineName
	I0725 12:02:29.371771    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetIP
	I0725 12:02:29.371878    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:29.371969    9422 provision.go:143] copyHostCerts
	I0725 12:02:29.372062    9422 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 12:02:29.372072    9422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 12:02:29.372228    9422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 12:02:29.372469    9422 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 12:02:29.372476    9422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 12:02:29.372561    9422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 12:02:29.372744    9422 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 12:02:29.372750    9422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 12:02:29.372831    9422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 12:02:29.372982    9422 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-691000 san=[127.0.0.1 192.169.0.28 enable-default-cni-691000 localhost minikube]
	I0725 12:02:29.561944    9422 provision.go:177] copyRemoteCerts
	I0725 12:02:29.561997    9422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 12:02:29.562014    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:29.562157    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:29.562250    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.562334    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:29.562433    9422 sshutil.go:53] new ssh client: &{IP:192.169.0.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/id_rsa Username:docker}
	I0725 12:02:29.602357    9422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 12:02:29.622093    9422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0725 12:02:29.642126    9422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 12:02:29.661587    9422 provision.go:87] duration metric: took 289.957187ms to configureAuth
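
configureAuth generates a server certificate for the VM's SANs and copies ca.pem, server.pem, and server-key.pem into /etc/docker, which is what lets a client reach dockerd over verified TLS on port 2376. A hedged client-side sketch of using the matching certs from the local .minikube/certs directory follows; the paths and the /_ping request are illustrative assumptions, not part of the test.

	// dockerTLSClient builds an HTTP client that trusts the minikube CA and
	// presents the client cert/key, then pings dockerd on the TLS port.
	package main
	
	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"net/http"
		"os"
	)
	
	func dockerTLSClient(caPath, certPath, keyPath string) (*http.Client, error) {
		caPEM, err := os.ReadFile(caPath)
		if err != nil {
			return nil, err
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(caPEM) {
			return nil, fmt.Errorf("invalid CA PEM in %s", caPath)
		}
		cert, err := tls.LoadX509KeyPair(certPath, keyPath)
		if err != nil {
			return nil, err
		}
		tlsCfg := &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}}
		return &http.Client{Transport: &http.Transport{TLSClientConfig: tlsCfg}}, nil
	}
	
	func main() {
		// hypothetical cert locations; the log keeps them under the MINIKUBE_HOME profile
		client, err := dockerTLSClient(
			os.ExpandEnv("$HOME/.minikube/certs/ca.pem"),
			os.ExpandEnv("$HOME/.minikube/certs/cert.pem"),
			os.ExpandEnv("$HOME/.minikube/certs/key.pem"),
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		resp, err := client.Get("https://192.169.0.28:2376/_ping")
		if err != nil {
			fmt.Fprintln(os.Stderr, "dockerd not reachable over TLS:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("dockerd /_ping:", resp.Status)
	}
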
	I0725 12:02:29.661600    9422 buildroot.go:189] setting minikube options for container-runtime
	I0725 12:02:29.661736    9422 config.go:182] Loaded profile config "enable-default-cni-691000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 12:02:29.661754    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .DriverName
	I0725 12:02:29.661888    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:29.661978    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:29.662056    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.662139    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.662216    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:29.662328    9422 main.go:141] libmachine: Using SSH client type: native
	I0725 12:02:29.662451    9422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f70c0] 0x102f9e20 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I0725 12:02:29.662459    9422 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 12:02:29.728133    9422 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 12:02:29.728150    9422 buildroot.go:70] root file system type: tmpfs
	I0725 12:02:29.728223    9422 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 12:02:29.728236    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:29.728379    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:29.728482    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.728589    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.728700    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:29.728855    9422 main.go:141] libmachine: Using SSH client type: native
	I0725 12:02:29.728995    9422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f70c0] 0x102f9e20 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I0725 12:02:29.729040    9422 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 12:02:29.801920    9422 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 12:02:29.801942    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:29.802087    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:29.802182    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.802289    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:29.802373    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:29.802500    9422 main.go:141] libmachine: Using SSH client type: native
	I0725 12:02:29.802648    9422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f70c0] 0x102f9e20 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I0725 12:02:29.802660    9422 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 12:02:31.346690    9422 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 12:02:31.346710    9422 main.go:141] libmachine: Checking connection to Docker...
	I0725 12:02:31.346716    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetURL
	I0725 12:02:31.346867    9422 main.go:141] libmachine: Docker is up and running!
	I0725 12:02:31.346873    9422 main.go:141] libmachine: Reticulating splines...
	I0725 12:02:31.346878    9422 client.go:171] duration metric: took 13.975912839s to LocalClient.Create
	I0725 12:02:31.346888    9422 start.go:167] duration metric: took 13.975954355s to libmachine.API.Create "enable-default-cni-691000"
	I0725 12:02:31.346900    9422 start.go:293] postStartSetup for "enable-default-cni-691000" (driver="hyperkit")
	I0725 12:02:31.346907    9422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 12:02:31.346916    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .DriverName
	I0725 12:02:31.347047    9422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 12:02:31.347060    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:31.347170    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:31.347272    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:31.347370    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:31.347485    9422 sshutil.go:53] new ssh client: &{IP:192.169.0.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/id_rsa Username:docker}
	I0725 12:02:31.393179    9422 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 12:02:31.396761    9422 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 12:02:31.396775    9422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 12:02:31.396875    9422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 12:02:31.397069    9422 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 12:02:31.397276    9422 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 12:02:31.410104    9422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 12:02:31.432028    9422 start.go:296] duration metric: took 85.118522ms for postStartSetup
	I0725 12:02:31.432053    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetConfigRaw
	I0725 12:02:31.432708    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetIP
	I0725 12:02:31.432872    9422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/enable-default-cni-691000/config.json ...
	I0725 12:02:31.433233    9422 start.go:128] duration metric: took 14.095633542s to createHost
	I0725 12:02:31.433247    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:31.433348    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:31.433451    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:31.433552    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:31.433661    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:31.433777    9422 main.go:141] libmachine: Using SSH client type: native
	I0725 12:02:31.433906    9422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f70c0] 0x102f9e20 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I0725 12:02:31.433913    9422 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 12:02:31.499864    9422 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721934150.961933967
	
	I0725 12:02:31.499878    9422 fix.go:216] guest clock: 1721934150.961933967
	I0725 12:02:31.499883    9422 fix.go:229] Guest: 2024-07-25 12:02:30.961933967 -0700 PDT Remote: 2024-07-25 12:02:31.433241 -0700 PDT m=+14.784398985 (delta=-471.307033ms)
	I0725 12:02:31.499907    9422 fix.go:200] guest clock delta is within tolerance: -471.307033ms
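
The guest-clock check above runs `date +%s.%N` on the VM, parses the result, and compares it against the host clock; a delta inside the tolerance (about -471ms here) is accepted, otherwise minikube would resync the guest. A minimal sketch of that comparison, fed the exact values from the log; the one-second tolerance is an assumption for illustration.

	// clockDelta parses `date +%s.%N` output and returns guest minus host time.
	package main
	
	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)
	
	func clockDelta(guestDate string, hostNow time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestDate), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// assumes the full nine-digit fractional part produced by %N
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		guest := time.Unix(sec, nsec)
		return guest.Sub(hostNow), nil
	}
	
	func main() {
		// guest `date +%s.%N` output and the host "Remote:" timestamp from the log above
		host := time.Unix(1721934151, 433241000)
		delta, err := clockDelta("1721934150.961933967", host)
		if err != nil {
			panic(err)
		}
		within := math.Abs(delta.Seconds()) < 1.0
		fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, within)
	}
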
	I0725 12:02:31.499911    9422 start.go:83] releasing machines lock for "enable-default-cni-691000", held for 14.162455718s
	I0725 12:02:31.499931    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .DriverName
	I0725 12:02:31.500089    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetIP
	I0725 12:02:31.500215    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .DriverName
	I0725 12:02:31.500591    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .DriverName
	I0725 12:02:31.500717    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .DriverName
	I0725 12:02:31.500809    9422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 12:02:31.500841    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:31.500870    9422 ssh_runner.go:195] Run: cat /version.json
	I0725 12:02:31.500883    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHHostname
	I0725 12:02:31.500954    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:31.500974    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHPort
	I0725 12:02:31.501089    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:31.501111    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHKeyPath
	I0725 12:02:31.501226    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:31.501240    9422 main.go:141] libmachine: (enable-default-cni-691000) Calling .GetSSHUsername
	I0725 12:02:31.501399    9422 sshutil.go:53] new ssh client: &{IP:192.169.0.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/id_rsa Username:docker}
	I0725 12:02:31.501426    9422 sshutil.go:53] new ssh client: &{IP:192.169.0.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/enable-default-cni-691000/id_rsa Username:docker}
	I0725 12:02:31.536679    9422 ssh_runner.go:195] Run: systemctl --version
	I0725 12:02:31.583730    9422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 12:02:31.588140    9422 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 12:02:31.588200    9422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 12:02:31.602604    9422 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 12:02:31.602622    9422 start.go:495] detecting cgroup driver to use...
	I0725 12:02:31.602735    9422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 12:02:31.618065    9422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 12:02:31.627448    9422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 12:02:31.637255    9422 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 12:02:31.637310    9422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 12:02:31.646973    9422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 12:02:31.656391    9422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 12:02:31.665849    9422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 12:02:31.675196    9422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 12:02:31.684894    9422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 12:02:31.694149    9422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 12:02:31.702969    9422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 12:02:31.711805    9422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 12:02:31.719885    9422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 12:02:31.728476    9422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 12:02:31.823705    9422 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0725 12:02:31.840934    9422 start.go:495] detecting cgroup driver to use...
	I0725 12:02:31.841023    9422 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 12:02:31.857681    9422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 12:02:31.871166    9422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 12:02:31.892958    9422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 12:02:31.905621    9422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 12:02:31.917254    9422 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 12:02:31.940868    9422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 12:02:31.953452    9422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 12:02:31.969646    9422 ssh_runner.go:195] Run: which cri-dockerd
	I0725 12:02:31.972708    9422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 12:02:31.981362    9422 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 12:02:31.996151    9422 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 12:02:32.091197    9422 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 12:02:32.201769    9422 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 12:02:32.201852    9422 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 12:02:32.216712    9422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 12:02:32.326118    9422 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 12:03:33.348249    9422 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.022078516s)
	I0725 12:03:33.348321    9422 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0725 12:03:33.400231    9422 out.go:177] 
	W0725 12:03:33.422035    9422 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 25 19:02:29 enable-default-cni-691000 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:29.582052997Z" level=info msg="Starting up"
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:29.582513659Z" level=info msg="containerd not running, starting managed containerd"
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:29.583258780Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=526
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.600349447Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.615972919Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.615994688Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.616029423Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.616039398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.616092904Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.616102815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.616221743Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.616258007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.616270365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.616277301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.616334740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.616486208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.617951216Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.617967290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.618049851Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.618083458Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.618152962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.618194163Z" level=info msg="metadata content store policy set" policy=shared
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621034064Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621086230Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621099955Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621110363Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621128049Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621199569Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621353441Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621445141Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621478945Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621492028Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621501506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621509737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621518312Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621527751Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621543408Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621558433Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621568927Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621576556Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621591030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621600628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621608578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621623541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621635295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621651171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621661586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621669789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621678238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.621687364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.622850876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.622897904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.622912583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.622924034Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.622939021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.622947405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.622954612Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.623041082Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.623076238Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.623086094Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.623094387Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.623100934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.623109051Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.623115517Z" level=info msg="NRI interface is disabled by configuration."
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.623330231Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.623399407Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.623451046Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 25 19:02:29 enable-default-cni-691000 dockerd[526]: time="2024-07-25T19:02:29.623463711Z" level=info msg="containerd successfully booted in 0.023749s"
	Jul 25 19:02:30 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:30.603723271Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 25 19:02:30 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:30.607934599Z" level=info msg="Loading containers: start."
	Jul 25 19:02:30 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:30.691297782Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 25 19:02:30 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:30.771656044Z" level=info msg="Loading containers: done."
	Jul 25 19:02:30 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:30.778853925Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 25 19:02:30 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:30.778988871Z" level=info msg="Daemon has completed initialization"
	Jul 25 19:02:30 enable-default-cni-691000 systemd[1]: Started Docker Application Container Engine.
	Jul 25 19:02:30 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:30.808991578Z" level=info msg="API listen on [::]:2376"
	Jul 25 19:02:30 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:30.809107970Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 19:02:31 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:31.801504135Z" level=info msg="Processing signal 'terminated'"
	Jul 25 19:02:31 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:31.802385019Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 19:02:31 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:31.802471727Z" level=info msg="Daemon shutdown complete"
	Jul 25 19:02:31 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:31.802528054Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 19:02:31 enable-default-cni-691000 dockerd[519]: time="2024-07-25T19:02:31.802588931Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 25 19:02:31 enable-default-cni-691000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 19:02:32 enable-default-cni-691000 systemd[1]: docker.service: Deactivated successfully.
	Jul 25 19:02:32 enable-default-cni-691000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 19:02:32 enable-default-cni-691000 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 19:02:32 enable-default-cni-691000 dockerd[919]: time="2024-07-25T19:02:32.839028701Z" level=info msg="Starting up"
	Jul 25 19:03:33 enable-default-cni-691000 dockerd[919]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 25 19:03:33 enable-default-cni-691000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 25 19:03:33 enable-default-cni-691000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 25 19:03:33 enable-default-cni-691000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0725 12:03:33.422136    9422 out.go:239] * 
	W0725 12:03:33.423419    9422 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 12:03:33.485861    9422 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (76.94s)
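The failure above traces back to the journalctl excerpt earlier in this block: dockerd reported failed to dial "/run/containerd/containerd.sock": context deadline exceeded, and systemd then marked docker.service as failed. Below is a minimal triage sketch, not part of minikube or this test suite; it assumes only the socket path quoted in the log, Go's standard library, and an arbitrary 10-second timeout, and it would typically be run inside the guest (for example via "minikube ssh").

// containerd_socket_probe.go - hypothetical standalone triage helper (not part of
// minikube). It checks whether anything is serving the containerd socket that
// dockerd failed to dial in the journal above. The 10-second timeout is an
// arbitrary illustrative choice.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path quoted from the journalctl output above.
	const socketPath = "/run/containerd/containerd.sock"

	// Dial the unix socket with a bounded wait. If containerd never came up,
	// this fails quickly ("no such file or directory" / "connection refused")
	// or, at worst, after the timeout - the same class of failure dockerd hit
	// before systemd marked docker.service as failed.
	conn, err := net.DialTimeout("unix", socketPath, 10*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "containerd socket not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("containerd socket is accepting connections")
}

In practice the same check can be made without code via the commands the log already suggests ("systemctl status docker.service", "journalctl -xeu docker.service") plus "systemctl status containerd" inside the VM; the sketch only makes the bounded dial that failed in the log explicit.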

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (77.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-620000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3
E0725 12:20:46.178706    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:20:47.049356    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:21:24.673623    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 12:21:27.134418    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:21:27.140064    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:21:32.671769    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:21:40.826827    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-diff-port-620000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3: exit status 90 (1m17.715629828s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-620000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "default-k8s-diff-port-620000" primary control-plane node in "default-k8s-diff-port-620000" cluster
	* Restarting existing hyperkit VM for "default-k8s-diff-port-620000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 12:20:28.119595   11267 out.go:291] Setting OutFile to fd 1 ...
	I0725 12:20:28.119779   11267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 12:20:28.119783   11267 out.go:304] Setting ErrFile to fd 2...
	I0725 12:20:28.119787   11267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 12:20:28.119967   11267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 12:20:28.121367   11267 out.go:298] Setting JSON to false
	I0725 12:20:28.145543   11267 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8398,"bootTime":1721926830,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 12:20:28.145630   11267 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 12:20:28.168400   11267 out.go:177] * [default-k8s-diff-port-620000] minikube v1.33.1 on Darwin 14.5
	I0725 12:20:28.210491   11267 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 12:20:28.210550   11267 notify.go:220] Checking for updates...
	I0725 12:20:28.253312   11267 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 12:20:28.274417   11267 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 12:20:28.295275   11267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 12:20:28.316289   11267 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 12:20:28.337523   11267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 12:20:28.359210   11267 config.go:182] Loaded profile config "default-k8s-diff-port-620000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 12:20:28.359864   11267 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 12:20:28.359954   11267 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 12:20:28.369714   11267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58337
	I0725 12:20:28.370066   11267 main.go:141] libmachine: () Calling .GetVersion
	I0725 12:20:28.370505   11267 main.go:141] libmachine: Using API Version  1
	I0725 12:20:28.370515   11267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 12:20:28.370723   11267 main.go:141] libmachine: () Calling .GetMachineName
	I0725 12:20:28.370851   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:20:28.371049   11267 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 12:20:28.371286   11267 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 12:20:28.371312   11267 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 12:20:28.379709   11267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58339
	I0725 12:20:28.380040   11267 main.go:141] libmachine: () Calling .GetVersion
	I0725 12:20:28.380344   11267 main.go:141] libmachine: Using API Version  1
	I0725 12:20:28.380351   11267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 12:20:28.380549   11267 main.go:141] libmachine: () Calling .GetMachineName
	I0725 12:20:28.380661   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:20:28.409498   11267 out.go:177] * Using the hyperkit driver based on existing profile
	I0725 12:20:28.451173   11267 start.go:297] selected driver: hyperkit
	I0725 12:20:28.451202   11267 start.go:901] validating driver "hyperkit" against &{Name:default-k8s-diff-port-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.37 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPort
s:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 12:20:28.451398   11267 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 12:20:28.456422   11267 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:20:28.456544   11267 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 12:20:28.464820   11267 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 12:20:28.469410   11267 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 12:20:28.469435   11267 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 12:20:28.469582   11267 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 12:20:28.469610   11267 cni.go:84] Creating CNI manager for ""
	I0725 12:20:28.469623   11267 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 12:20:28.469667   11267 start.go:340] cluster config:
	{Name:default-k8s-diff-port-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-620000 N
amespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.37 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExp
iration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 12:20:28.469763   11267 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:20:28.512261   11267 out.go:177] * Starting "default-k8s-diff-port-620000" primary control-plane node in "default-k8s-diff-port-620000" cluster
	I0725 12:20:28.533557   11267 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 12:20:28.533629   11267 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0725 12:20:28.533674   11267 cache.go:56] Caching tarball of preloaded images
	I0725 12:20:28.533903   11267 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 12:20:28.533922   11267 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 12:20:28.534063   11267 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/default-k8s-diff-port-620000/config.json ...
	I0725 12:20:28.535029   11267 start.go:360] acquireMachinesLock for default-k8s-diff-port-620000: {Name:mkc80e2458edb2366d10de0058accb31067397d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 12:20:28.535157   11267 start.go:364] duration metric: took 102.497µs to acquireMachinesLock for "default-k8s-diff-port-620000"
	I0725 12:20:28.535193   11267 start.go:96] Skipping create...Using existing machine configuration
	I0725 12:20:28.535228   11267 fix.go:54] fixHost starting: 
	I0725 12:20:28.535646   11267 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 12:20:28.535687   11267 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 12:20:28.544702   11267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58341
	I0725 12:20:28.545041   11267 main.go:141] libmachine: () Calling .GetVersion
	I0725 12:20:28.545389   11267 main.go:141] libmachine: Using API Version  1
	I0725 12:20:28.545428   11267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 12:20:28.545676   11267 main.go:141] libmachine: () Calling .GetMachineName
	I0725 12:20:28.545807   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:20:28.545908   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetState
	I0725 12:20:28.545999   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:20:28.546073   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | hyperkit pid from json: 11188
	I0725 12:20:28.547022   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | hyperkit pid 11188 missing from process table
	I0725 12:20:28.547070   11267 fix.go:112] recreateIfNeeded on default-k8s-diff-port-620000: state=Stopped err=<nil>
	I0725 12:20:28.547086   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	W0725 12:20:28.547159   11267 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 12:20:28.589492   11267 out.go:177] * Restarting existing hyperkit VM for "default-k8s-diff-port-620000" ...
	I0725 12:20:28.612287   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .Start
	I0725 12:20:28.612600   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:20:28.612656   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/hyperkit.pid
	I0725 12:20:28.614080   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | hyperkit pid 11188 missing from process table
	I0725 12:20:28.614107   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | pid 11188 is in state "Stopped"
	I0725 12:20:28.614123   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/hyperkit.pid...
	I0725 12:20:28.614298   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | Using UUID 594b7edf-1523-43eb-ae51-084ef1492300
	I0725 12:20:28.639481   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | Generated MAC f2:a3:70:cf:13:4
	I0725 12:20:28.639504   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-620000
	I0725 12:20:28.639670   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"594b7edf-1523-43eb-ae51-084ef1492300", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000385650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pi
d:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 12:20:28.639702   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"594b7edf-1523-43eb-ae51-084ef1492300", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000385650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pi
d:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0725 12:20:28.639761   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "594b7edf-1523-43eb-ae51-084ef1492300", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/default-k8s-diff-port-620000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19326-1195/
.minikube/machines/default-k8s-diff-port-620000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-620000"}
	I0725 12:20:28.639805   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 594b7edf-1523-43eb-ae51-084ef1492300 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/default-k8s-diff-port-620000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/tty,log=/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/console-ring -f kexec,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/bzimage,/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-620000"
	I0725 12:20:28.639818   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0725 12:20:28.641321   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 DEBUG: hyperkit: Pid is 11278
	I0725 12:20:28.642024   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | Attempt 0
	I0725 12:20:28.642046   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:20:28.642133   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | hyperkit pid from json: 11278
	I0725 12:20:28.643758   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | Searching for f2:a3:70:cf:13:4 in /var/db/dhcpd_leases ...
	I0725 12:20:28.643865   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I0725 12:20:28.643878   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:8e:a6:b8:d4:d8:a2 ID:1,8e:a6:b8:d4:d8:a2 Lease:0x66a3f65c}
	I0725 12:20:28.643920   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:f2:a3:70:cf:13:4 ID:1,f2:a3:70:cf:13:4 Lease:0x66a3f64e}
	I0725 12:20:28.643937   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | Found match: f2:a3:70:cf:13:4
	I0725 12:20:28.643963   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | IP: 192.169.0.37
	I0725 12:20:28.643980   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetConfigRaw
	I0725 12:20:28.644705   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetIP
	I0725 12:20:28.644855   11267 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/default-k8s-diff-port-620000/config.json ...
	I0725 12:20:28.645288   11267 machine.go:94] provisionDockerMachine start ...
	I0725 12:20:28.645298   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:20:28.645414   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:28.645514   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:20:28.645600   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:28.645716   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:28.645834   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:20:28.646019   11267 main.go:141] libmachine: Using SSH client type: native
	I0725 12:20:28.646215   11267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x21cd0c0] 0x21cfe20 <nil>  [] 0s} 192.169.0.37 22 <nil> <nil>}
	I0725 12:20:28.646227   11267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 12:20:28.648988   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0725 12:20:28.657657   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0725 12:20:28.658738   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 12:20:28.658764   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 12:20:28.658774   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 12:20:28.658788   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 12:20:29.048494   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0725 12:20:29.048510   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0725 12:20:29.163430   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0725 12:20:29.163454   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0725 12:20:29.163470   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0725 12:20:29.163483   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0725 12:20:29.164233   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0725 12:20:29.164243   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0725 12:20:34.765811   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0725 12:20:34.765853   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0725 12:20:34.765863   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0725 12:20:34.789466   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | 2024/07/25 12:20:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0725 12:20:41.810232   11267 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 12:20:41.810258   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetMachineName
	I0725 12:20:41.810430   11267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-620000"
	I0725 12:20:41.810442   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetMachineName
	I0725 12:20:41.810532   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:41.810617   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:20:41.810706   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:41.810798   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:41.810878   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:20:41.811009   11267 main.go:141] libmachine: Using SSH client type: native
	I0725 12:20:41.811180   11267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x21cd0c0] 0x21cfe20 <nil>  [] 0s} 192.169.0.37 22 <nil> <nil>}
	I0725 12:20:41.811190   11267 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-620000 && echo "default-k8s-diff-port-620000" | sudo tee /etc/hostname
	I0725 12:20:41.877249   11267 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-620000
	
	I0725 12:20:41.877269   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:41.877422   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:20:41.877518   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:41.877611   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:41.877701   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:20:41.877843   11267 main.go:141] libmachine: Using SSH client type: native
	I0725 12:20:41.877982   11267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x21cd0c0] 0x21cfe20 <nil>  [] 0s} 192.169.0.37 22 <nil> <nil>}
	I0725 12:20:41.877996   11267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-620000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-620000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-620000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 12:20:41.940462   11267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 12:20:41.940486   11267 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1195/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1195/.minikube}
	I0725 12:20:41.940504   11267 buildroot.go:174] setting up certificates
	I0725 12:20:41.940511   11267 provision.go:84] configureAuth start
	I0725 12:20:41.940519   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetMachineName
	I0725 12:20:41.940652   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetIP
	I0725 12:20:41.940755   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:41.940838   11267 provision.go:143] copyHostCerts
	I0725 12:20:41.940932   11267 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem, removing ...
	I0725 12:20:41.940942   11267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem
	I0725 12:20:41.941310   11267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/ca.pem (1078 bytes)
	I0725 12:20:41.941564   11267 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem, removing ...
	I0725 12:20:41.941570   11267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem
	I0725 12:20:41.941656   11267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/cert.pem (1123 bytes)
	I0725 12:20:41.941830   11267 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem, removing ...
	I0725 12:20:41.941836   11267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem
	I0725 12:20:41.941917   11267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1195/.minikube/key.pem (1679 bytes)
	I0725 12:20:41.942058   11267 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-620000 san=[127.0.0.1 192.169.0.37 default-k8s-diff-port-620000 localhost minikube]
	I0725 12:20:42.005623   11267 provision.go:177] copyRemoteCerts
	I0725 12:20:42.005677   11267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 12:20:42.005698   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:42.005833   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:20:42.005938   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:42.006043   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:20:42.006130   11267 sshutil.go:53] new ssh client: &{IP:192.169.0.37 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/id_rsa Username:docker}
	I0725 12:20:42.040756   11267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 12:20:42.059697   11267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0725 12:20:42.078718   11267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 12:20:42.097565   11267 provision.go:87] duration metric: took 157.036992ms to configureAuth
	I0725 12:20:42.097578   11267 buildroot.go:189] setting minikube options for container-runtime
	I0725 12:20:42.097725   11267 config.go:182] Loaded profile config "default-k8s-diff-port-620000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 12:20:42.097738   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:20:42.097869   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:42.097957   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:20:42.098045   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:42.098128   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:42.098221   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:20:42.098336   11267 main.go:141] libmachine: Using SSH client type: native
	I0725 12:20:42.098458   11267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x21cd0c0] 0x21cfe20 <nil>  [] 0s} 192.169.0.37 22 <nil> <nil>}
	I0725 12:20:42.098465   11267 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 12:20:42.151909   11267 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 12:20:42.151925   11267 buildroot.go:70] root file system type: tmpfs
	I0725 12:20:42.151998   11267 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 12:20:42.152014   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:42.152152   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:20:42.152248   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:42.152334   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:42.152423   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:20:42.152544   11267 main.go:141] libmachine: Using SSH client type: native
	I0725 12:20:42.152691   11267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x21cd0c0] 0x21cfe20 <nil>  [] 0s} 192.169.0.37 22 <nil> <nil>}
	I0725 12:20:42.152735   11267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 12:20:42.217584   11267 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 12:20:42.217606   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:42.217743   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:20:42.217841   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:42.217927   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:42.218006   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:20:42.218129   11267 main.go:141] libmachine: Using SSH client type: native
	I0725 12:20:42.218274   11267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x21cd0c0] 0x21cfe20 <nil>  [] 0s} 192.169.0.37 22 <nil> <nil>}
	I0725 12:20:42.218286   11267 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 12:20:43.826552   11267 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 12:20:43.826565   11267 machine.go:97] duration metric: took 15.181000003s to provisionDockerMachine
	I0725 12:20:43.826578   11267 start.go:293] postStartSetup for "default-k8s-diff-port-620000" (driver="hyperkit")
	I0725 12:20:43.826585   11267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 12:20:43.826596   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:20:43.826788   11267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 12:20:43.826802   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:43.826890   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:20:43.826973   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:43.827065   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:20:43.827143   11267 sshutil.go:53] new ssh client: &{IP:192.169.0.37 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/id_rsa Username:docker}
	I0725 12:20:43.863592   11267 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 12:20:43.867932   11267 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 12:20:43.867950   11267 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/addons for local assets ...
	I0725 12:20:43.868070   11267 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1195/.minikube/files for local assets ...
	I0725 12:20:43.868277   11267 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem -> 17322.pem in /etc/ssl/certs
	I0725 12:20:43.868499   11267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 12:20:43.878999   11267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/ssl/certs/17322.pem --> /etc/ssl/certs/17322.pem (1708 bytes)
	I0725 12:20:43.912533   11267 start.go:296] duration metric: took 85.944105ms for postStartSetup
	I0725 12:20:43.912558   11267 fix.go:56] duration metric: took 15.377081169s for fixHost
	I0725 12:20:43.912570   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:43.912703   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:20:43.912790   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:43.912889   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:43.912972   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:20:43.913098   11267 main.go:141] libmachine: Using SSH client type: native
	I0725 12:20:43.913230   11267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x21cd0c0] 0x21cfe20 <nil>  [] 0s} 192.169.0.37 22 <nil> <nil>}
	I0725 12:20:43.913237   11267 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 12:20:43.967302   11267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721935243.892355981
	
	I0725 12:20:43.967315   11267 fix.go:216] guest clock: 1721935243.892355981
	I0725 12:20:43.967320   11267 fix.go:229] Guest: 2024-07-25 12:20:43.892355981 -0700 PDT Remote: 2024-07-25 12:20:43.91256 -0700 PDT m=+15.827488262 (delta=-20.204019ms)
	I0725 12:20:43.967340   11267 fix.go:200] guest clock delta is within tolerance: -20.204019ms
	I0725 12:20:43.967344   11267 start.go:83] releasing machines lock for "default-k8s-diff-port-620000", held for 15.431901393s
	I0725 12:20:43.967361   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:20:43.967491   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetIP
	I0725 12:20:43.967595   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:20:43.967925   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:20:43.968046   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:20:43.968196   11267 ssh_runner.go:195] Run: cat /version.json
	I0725 12:20:43.968209   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:43.968291   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:20:43.968367   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:43.968438   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:20:43.968519   11267 sshutil.go:53] new ssh client: &{IP:192.169.0.37 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/id_rsa Username:docker}
	I0725 12:20:43.968796   11267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 12:20:43.968830   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:20:43.968919   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:20:43.969004   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:20:43.969087   11267 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:20:43.969175   11267 sshutil.go:53] new ssh client: &{IP:192.169.0.37 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/id_rsa Username:docker}
	I0725 12:20:43.997636   11267 ssh_runner.go:195] Run: systemctl --version
	I0725 12:20:44.045577   11267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 12:20:44.049849   11267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 12:20:44.049911   11267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 12:20:44.062931   11267 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 12:20:44.062945   11267 start.go:495] detecting cgroup driver to use...
	I0725 12:20:44.063049   11267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 12:20:44.077724   11267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 12:20:44.085913   11267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 12:20:44.094053   11267 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 12:20:44.094100   11267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 12:20:44.102432   11267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 12:20:44.110724   11267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 12:20:44.118915   11267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 12:20:44.127122   11267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 12:20:44.135453   11267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 12:20:44.143621   11267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 12:20:44.151852   11267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 12:20:44.160011   11267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 12:20:44.167388   11267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 12:20:44.174815   11267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 12:20:44.273296   11267 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0725 12:20:44.292138   11267 start.go:495] detecting cgroup driver to use...
	I0725 12:20:44.292215   11267 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 12:20:44.313020   11267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 12:20:44.323639   11267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 12:20:44.348754   11267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 12:20:44.361443   11267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 12:20:44.372526   11267 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 12:20:44.393175   11267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 12:20:44.404514   11267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 12:20:44.419223   11267 ssh_runner.go:195] Run: which cri-dockerd
	I0725 12:20:44.422182   11267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 12:20:44.430167   11267 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 12:20:44.443799   11267 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 12:20:44.551505   11267 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 12:20:44.662561   11267 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 12:20:44.662630   11267 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 12:20:44.677595   11267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 12:20:44.768716   11267 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 12:21:45.576312   11267 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.806568241s)
	I0725 12:21:45.576376   11267 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0725 12:21:45.612523   11267 out.go:177] 
	W0725 12:21:45.633892   11267 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 25 19:20:42 default-k8s-diff-port-620000 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:42.435842528Z" level=info msg="Starting up"
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:42.436317476Z" level=info msg="containerd not running, starting managed containerd"
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:42.436969725Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=503
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.452261948Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.470872988Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.470915999Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.470963524Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.470998190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471037293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471070240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471216425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471254720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471269740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471280964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471310048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471395721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.472860597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.472900676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.473000919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.473034659Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.473069847Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.473088859Z" level=info msg="metadata content store policy set" policy=shared
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474204997Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474279402Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474293284Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474303356Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474312502Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474356749Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474560178Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474636028Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474647385Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474656189Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474667133Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474675681Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474684831Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474693148Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474703231Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474711890Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474725594Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474736566Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474750358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474762172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474770560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474779045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474786979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474796858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474804593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474812640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474820497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474829600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474837307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474852215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474862318Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474879982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474893866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474901559Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474948486Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474963174Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474970928Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474982017Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474990939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474999304Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.475013452Z" level=info msg="NRI interface is disabled by configuration."
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.475143058Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.475245560Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.475298143Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.475317824Z" level=info msg="containerd successfully booted in 0.023546s"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.454062614Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.478644378Z" level=info msg="Loading containers: start."
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.615196685Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.675451396Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.718794278Z" level=info msg="Loading containers: done."
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.725469324Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.725575539Z" level=info msg="Daemon has completed initialization"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.747493149Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.747594958Z" level=info msg="API listen on [::]:2376"
	Jul 25 19:20:43 default-k8s-diff-port-620000 systemd[1]: Started Docker Application Container Engine.
	Jul 25 19:20:44 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:44.706138480Z" level=info msg="Processing signal 'terminated'"
	Jul 25 19:20:44 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:44.707066422Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 19:20:44 default-k8s-diff-port-620000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 19:20:44 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:44.707464770Z" level=info msg="Daemon shutdown complete"
	Jul 25 19:20:44 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:44.707503588Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 19:20:44 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:44.707503183Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 25 19:20:45 default-k8s-diff-port-620000 systemd[1]: docker.service: Deactivated successfully.
	Jul 25 19:20:45 default-k8s-diff-port-620000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 19:20:45 default-k8s-diff-port-620000 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 19:20:45 default-k8s-diff-port-620000 dockerd[930]: time="2024-07-25T19:20:45.737245167Z" level=info msg="Starting up"
	Jul 25 19:21:45 default-k8s-diff-port-620000 dockerd[930]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 25 19:21:45 default-k8s-diff-port-620000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 25 19:21:45 default-k8s-diff-port-620000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 25 19:21:45 default-k8s-diff-port-620000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 25 19:20:42 default-k8s-diff-port-620000 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:42.435842528Z" level=info msg="Starting up"
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:42.436317476Z" level=info msg="containerd not running, starting managed containerd"
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:42.436969725Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=503
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.452261948Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.470872988Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.470915999Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.470963524Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.470998190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471037293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471070240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471216425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471254720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471269740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471280964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471310048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.471395721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.472860597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.472900676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.473000919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.473034659Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.473069847Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.473088859Z" level=info msg="metadata content store policy set" policy=shared
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474204997Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474279402Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474293284Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474303356Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474312502Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474356749Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474560178Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474636028Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474647385Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474656189Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474667133Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474675681Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474684831Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474693148Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474703231Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474711890Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474725594Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474736566Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474750358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474762172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474770560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474779045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474786979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474796858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474804593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474812640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474820497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474829600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474837307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474852215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474862318Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474879982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474893866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474901559Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474948486Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474963174Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474970928Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474982017Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474990939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.474999304Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.475013452Z" level=info msg="NRI interface is disabled by configuration."
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.475143058Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.475245560Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.475298143Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 25 19:20:42 default-k8s-diff-port-620000 dockerd[503]: time="2024-07-25T19:20:42.475317824Z" level=info msg="containerd successfully booted in 0.023546s"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.454062614Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.478644378Z" level=info msg="Loading containers: start."
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.615196685Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.675451396Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.718794278Z" level=info msg="Loading containers: done."
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.725469324Z" level=info msg="Docker daemon" commit=a21b1a2 containerd-snapshotter=false storage-driver=overlay2 version=27.1.0
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.725575539Z" level=info msg="Daemon has completed initialization"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.747493149Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 19:20:43 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:43.747594958Z" level=info msg="API listen on [::]:2376"
	Jul 25 19:20:43 default-k8s-diff-port-620000 systemd[1]: Started Docker Application Container Engine.
	Jul 25 19:20:44 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:44.706138480Z" level=info msg="Processing signal 'terminated'"
	Jul 25 19:20:44 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:44.707066422Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 19:20:44 default-k8s-diff-port-620000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 25 19:20:44 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:44.707464770Z" level=info msg="Daemon shutdown complete"
	Jul 25 19:20:44 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:44.707503588Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 19:20:44 default-k8s-diff-port-620000 dockerd[496]: time="2024-07-25T19:20:44.707503183Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 25 19:20:45 default-k8s-diff-port-620000 systemd[1]: docker.service: Deactivated successfully.
	Jul 25 19:20:45 default-k8s-diff-port-620000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 19:20:45 default-k8s-diff-port-620000 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 19:20:45 default-k8s-diff-port-620000 dockerd[930]: time="2024-07-25T19:20:45.737245167Z" level=info msg="Starting up"
	Jul 25 19:21:45 default-k8s-diff-port-620000 dockerd[930]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 25 19:21:45 default-k8s-diff-port-620000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 25 19:21:45 default-k8s-diff-port-620000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 25 19:21:45 default-k8s-diff-port-620000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0725 12:21:45.633980   11267 out.go:239] * 
	* 
	W0725 12:21:45.634839   11267 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 12:21:45.740109   11267 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p default-k8s-diff-port-620000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3": exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000: exit status 6 (146.853435ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 12:21:45.944196   11290 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-620000" does not appear in /Users/jenkins/minikube-integration/19326-1195/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-620000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (77.89s)
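[editor's note] The root cause visible in the journal above is dockerd[930] timing out while dialing /run/containerd/containerd.sock ("context deadline exceeded") after the daemon restart. The following is a minimal, hypothetical Go sketch of that failure mode only — a deadline-bounded dial loop against a unix socket. The function name waitForSocket and the retry/backoff values are illustrative assumptions, not dockerd's actual implementation (which uses a gRPC dial with a context).

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket dials the unix socket repeatedly until it answers or the
// overall deadline passes, mirroring the "failed to dial" error in the log.
func waitForSocket(path string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("failed to dial %q: %w", path, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", time.Minute); err != nil {
		fmt.Println(err) // containerd never came up, so the dial deadline expires
	}
}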

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-620000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000: exit status 6 (144.940716ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 12:21:46.090079   11295 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-620000" does not appear in /Users/jenkins/minikube-integration/19326-1195/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-620000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.15s)
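[editor's note] This failure and the ones that follow all trace back to the same condition: the profile name is not present as a context in the kubeconfig (see the status.go:417 error above). A minimal sketch of that lookup, assuming client-go's clientcmd package for illustration; contextExists is a hypothetical helper, not the test's actual code.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists reports whether the named context appears in the kubeconfig
// file, which is exactly what the "does not appear in .../kubeconfig" check is about.
func contextExists(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := contextExists(os.Getenv("KUBECONFIG"), "default-k8s-diff-port-620000")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("context present:", ok) // false here, hence the cascade of failures
}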

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-620000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-620000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-620000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (37.398155ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-620000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-620000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000: exit status 6 (142.61484ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 12:21:46.270901   11301 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-620000" does not appear in /Users/jenkins/minikube-integration/19326-1195/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-620000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.18s)
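[editor's note] The assertion at start_stop_delete_test.go:297 expects the dashboard addon deployment info to contain registry.k8s.io/echoserver:1.4. A hedged sketch of that kind of check, shelling out to kubectl with a standard jsonpath expression; deploymentImages is a hypothetical helper written for illustration, not the suite's exact code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// deploymentImages lists the container images of a deployment via kubectl.
func deploymentImages(ctx, ns, deploy string) ([]string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
		"get", "deploy", deploy,
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
	if err != nil {
		// With a missing context this fails the same way as the log above:
		// error: context "default-k8s-diff-port-620000" does not exist
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	imgs, err := deploymentImages("default-k8s-diff-port-620000",
		"kubernetes-dashboard", "dashboard-metrics-scraper")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, img := range imgs {
		if strings.Contains(img, "registry.k8s.io/echoserver:1.4") {
			fmt.Println("expected addon image found:", img)
			return
		}
	}
	fmt.Println("expected addon image missing; got:", imgs)
}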

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (59.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-620000 image list --format=json
E0725 12:21:46.610308    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:22:10.123510    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:22:16.454893    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:22:20.266741    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
start_stop_delete_test.go:304: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-diff-port-620000 image list --format=json: (59.6271332s)
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000: exit status 6 (147.495871ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 12:22:46.047783   11310 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-620000" does not appear in /Users/jenkins/minikube-integration/19326-1195/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-620000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (59.78s)
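[editor's note] The "-want +got" block above is the go-cmp diff format: every expected v1.30.3 image is on the -want side because `image list --format=json` returned nothing usable. A minimal sketch of how such a comparison is typically written (sort, then cmp.Diff); this is an assumed illustration, not the suite's exact helper.

package main

import (
	"fmt"
	"sort"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/kube-controller-manager:v1.30.3",
		"registry.k8s.io/kube-proxy:v1.30.3",
		"registry.k8s.io/kube-scheduler:v1.30.3",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // the image list command returned no images in this run

	sort.Strings(want)
	sort.Strings(got)
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.3 images missing (-want +got):\n%s", diff)
	}
}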

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-620000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p default-k8s-diff-port-620000 --alsologtostderr -v=1: exit status 80 (1.756884428s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-620000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 12:22:46.111751   11315 out.go:291] Setting OutFile to fd 1 ...
	I0725 12:22:46.112534   11315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 12:22:46.112543   11315 out.go:304] Setting ErrFile to fd 2...
	I0725 12:22:46.112549   11315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 12:22:46.113074   11315 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 12:22:46.113409   11315 out.go:298] Setting JSON to false
	I0725 12:22:46.113428   11315 mustload.go:65] Loading cluster: default-k8s-diff-port-620000
	I0725 12:22:46.113680   11315 config.go:182] Loaded profile config "default-k8s-diff-port-620000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 12:22:46.114013   11315 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 12:22:46.114067   11315 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 12:22:46.122401   11315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58401
	I0725 12:22:46.122805   11315 main.go:141] libmachine: () Calling .GetVersion
	I0725 12:22:46.123212   11315 main.go:141] libmachine: Using API Version  1
	I0725 12:22:46.123226   11315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 12:22:46.123443   11315 main.go:141] libmachine: () Calling .GetMachineName
	I0725 12:22:46.123569   11315 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetState
	I0725 12:22:46.123665   11315 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 12:22:46.123727   11315 main.go:141] libmachine: (default-k8s-diff-port-620000) DBG | hyperkit pid from json: 11278
	I0725 12:22:46.124701   11315 host.go:66] Checking if "default-k8s-diff-port-620000" exists ...
	I0725 12:22:46.124937   11315 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 12:22:46.124956   11315 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 12:22:46.133206   11315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58403
	I0725 12:22:46.133525   11315 main.go:141] libmachine: () Calling .GetVersion
	I0725 12:22:46.133846   11315 main.go:141] libmachine: Using API Version  1
	I0725 12:22:46.133853   11315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 12:22:46.134070   11315 main.go:141] libmachine: () Calling .GetMachineName
	I0725 12:22:46.134193   11315 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:22:46.134848   11315 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false)
extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.33.1-1721690939-19319/minikube-v1.33.1-1721690939-19319-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.33.1-1721690939-19319-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///syste
m listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/Users:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-620000 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!
s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0725 12:22:46.157022   11315 out.go:177] * Pausing node default-k8s-diff-port-620000 ... 
	I0725 12:22:46.199516   11315 host.go:66] Checking if "default-k8s-diff-port-620000" exists ...
	I0725 12:22:46.200057   11315 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 12:22:46.200115   11315 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 12:22:46.209602   11315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58405
	I0725 12:22:46.209942   11315 main.go:141] libmachine: () Calling .GetVersion
	I0725 12:22:46.210271   11315 main.go:141] libmachine: Using API Version  1
	I0725 12:22:46.210288   11315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 12:22:46.210488   11315 main.go:141] libmachine: () Calling .GetMachineName
	I0725 12:22:46.210609   11315 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .DriverName
	I0725 12:22:46.210784   11315 ssh_runner.go:195] Run: systemctl --version
	I0725 12:22:46.210803   11315 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHHostname
	I0725 12:22:46.210895   11315 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHPort
	I0725 12:22:46.210982   11315 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHKeyPath
	I0725 12:22:46.211060   11315 main.go:141] libmachine: (default-k8s-diff-port-620000) Calling .GetSSHUsername
	I0725 12:22:46.211168   11315 sshutil.go:53] new ssh client: &{IP:192.169.0.37 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/default-k8s-diff-port-620000/id_rsa Username:docker}
	I0725 12:22:46.242418   11315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 12:22:46.253760   11315 pause.go:51] kubelet running: false
	I0725 12:22:46.253807   11315 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0725 12:22:46.264869   11315 retry.go:31] will retry after 341.557065ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0725 12:22:46.607012   11315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 12:22:46.620426   11315 pause.go:51] kubelet running: false
	I0725 12:22:46.620489   11315 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0725 12:22:46.631605   11315 retry.go:31] will retry after 505.82881ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0725 12:22:47.139200   11315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 12:22:47.153047   11315 pause.go:51] kubelet running: false
	I0725 12:22:47.153108   11315 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0725 12:22:47.164391   11315 retry.go:31] will retry after 478.476516ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0725 12:22:47.643208   11315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 12:22:47.656600   11315 pause.go:51] kubelet running: false
	I0725 12:22:47.656664   11315 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0725 12:22:47.689262   11315 out.go:177] 
	W0725 12:22:47.709941   11315 out.go:239] X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	W0725 12:22:47.709965   11315 out.go:239] * 
	* 
	W0725 12:22:47.716536   11315 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 12:22:47.793959   11315 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p default-k8s-diff-port-620000 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000: exit status 6 (147.275939ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 12:22:47.951823   11320 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-620000" does not appear in /Users/jenkins/minikube-integration/19326-1195/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-620000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000: exit status 6 (144.189065ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 12:22:48.096250   11325 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-620000" does not appear in /Users/jenkins/minikube-integration/19326-1195/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-620000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.05s)
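[editor's note] The repeated "retry.go:31] will retry after ..." lines above come from a bounded retry around `sudo systemctl disable --now kubelet`, which can never succeed here because the kubelet unit file is absent on the guest. A simplified sketch of that retry pattern with a fixed backoff (minikube's actual retry uses jittered delays); disableKubelet is a hypothetical name.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// disableKubelet retries the systemctl call a fixed number of times and
// returns the last error, echoing the GUEST_PAUSE exit path in the log.
func disableKubelet(attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").CombinedOutput()
		if e == nil {
			return nil
		}
		// Every attempt here fails with:
		// "Failed to disable unit: Unit file kubelet.service does not exist."
		err = fmt.Errorf("kubelet disable --now: %v: %s", e, out)
		time.Sleep(backoff)
	}
	return err
}

func main() {
	if err := disableKubelet(4, 500*time.Millisecond); err != nil {
		fmt.Println("Exiting due to GUEST_PAUSE:", err)
	}
}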

                                                
                                    

Test pass (290/330)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 22.63
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.3/json-events 7.45
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.3
18 TestDownloadOnly/v1.30.3/DeleteAll 0.23
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 10.48
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.21
30 TestBinaryMirror 0.97
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
36 TestAddons/Setup 212.69
38 TestAddons/serial/Volcano 38.24
40 TestAddons/serial/GCPAuth/Namespaces 0.1
42 TestAddons/parallel/Registry 13.75
43 TestAddons/parallel/Ingress 21.17
44 TestAddons/parallel/InspektorGadget 10.51
45 TestAddons/parallel/MetricsServer 5.51
46 TestAddons/parallel/HelmTiller 10.36
48 TestAddons/parallel/CSI 57.21
49 TestAddons/parallel/Headlamp 18.39
50 TestAddons/parallel/CloudSpanner 5.4
51 TestAddons/parallel/LocalPath 52.42
52 TestAddons/parallel/NvidiaDevicePlugin 5.37
53 TestAddons/parallel/Yakd 10.45
54 TestAddons/StoppedEnableDisable 5.94
62 TestHyperKitDriverInstallOrUpdate 8.69
65 TestErrorSpam/setup 36.05
66 TestErrorSpam/start 1.28
67 TestErrorSpam/status 0.49
68 TestErrorSpam/pause 1.35
69 TestErrorSpam/unpause 1.36
70 TestErrorSpam/stop 155.79
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 54.12
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 36.96
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.98
82 TestFunctional/serial/CacheCmd/cache/add_local 1.34
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.04
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
88 TestFunctional/serial/MinikubeKubectlCmd 1.13
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.44
90 TestFunctional/serial/ExtraConfig 58.42
91 TestFunctional/serial/ComponentHealth 0.05
92 TestFunctional/serial/LogsCmd 2.63
93 TestFunctional/serial/LogsFileCmd 2.85
94 TestFunctional/serial/InvalidService 4.62
96 TestFunctional/parallel/ConfigCmd 0.51
97 TestFunctional/parallel/DashboardCmd 12.27
98 TestFunctional/parallel/DryRun 1.3
99 TestFunctional/parallel/InternationalLanguage 1.29
100 TestFunctional/parallel/StatusCmd 0.53
104 TestFunctional/parallel/ServiceCmdConnect 8.63
105 TestFunctional/parallel/AddonsCmd 0.22
106 TestFunctional/parallel/PersistentVolumeClaim 27.47
108 TestFunctional/parallel/SSHCmd 0.33
109 TestFunctional/parallel/CpCmd 1.03
110 TestFunctional/parallel/MySQL 26.92
111 TestFunctional/parallel/FileSync 0.23
112 TestFunctional/parallel/CertSync 1.03
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.16
120 TestFunctional/parallel/License 0.45
121 TestFunctional/parallel/Version/short 0.1
122 TestFunctional/parallel/Version/components 0.49
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.15
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.15
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.15
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.33
128 TestFunctional/parallel/ImageCommands/Setup 1.7
129 TestFunctional/parallel/DockerEnv/bash 0.61
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.87
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.65
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.25
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.31
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.48
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
140 TestFunctional/parallel/ServiceCmd/DeployApp 22.13
141 TestFunctional/parallel/ServiceCmd/List 0.18
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.18
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
144 TestFunctional/parallel/ServiceCmd/Format 0.27
145 TestFunctional/parallel/ServiceCmd/URL 0.27
147 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.36
148 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.14
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
158 TestFunctional/parallel/ProfileCmd/profile_list 0.27
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
160 TestFunctional/parallel/MountCmd/any-port 5.92
161 TestFunctional/parallel/MountCmd/specific-port 1.7
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.87
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 205.72
170 TestMultiControlPlane/serial/DeployApp 5.21
171 TestMultiControlPlane/serial/PingHostFromPods 1.27
172 TestMultiControlPlane/serial/AddWorkerNode 49.61
173 TestMultiControlPlane/serial/NodeLabels 0.05
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.33
175 TestMultiControlPlane/serial/CopyFile 9.06
176 TestMultiControlPlane/serial/StopSecondaryNode 8.69
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.27
178 TestMultiControlPlane/serial/RestartSecondaryNode 38.9
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.33
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.24
183 TestMultiControlPlane/serial/StopCluster 24.92
184 TestMultiControlPlane/serial/RestartCluster 100.97
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.25
186 TestMultiControlPlane/serial/AddSecondaryNode 75.17
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.33
190 TestImageBuild/serial/Setup 38.24
191 TestImageBuild/serial/NormalBuild 1.45
192 TestImageBuild/serial/BuildWithBuildArg 0.73
193 TestImageBuild/serial/BuildWithDockerIgnore 0.59
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.61
198 TestJSONOutput/start/Command 91
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.46
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.44
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 8.35
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.57
226 TestMainNoArgs 0.08
227 TestMinikubeProfile 90.89
233 TestMultiNode/serial/FreshStart2Nodes 110.78
234 TestMultiNode/serial/DeployApp2Nodes 4.37
235 TestMultiNode/serial/PingHostFrom2Pods 0.9
236 TestMultiNode/serial/AddNode 45.54
237 TestMultiNode/serial/MultiNodeLabels 0.05
238 TestMultiNode/serial/ProfileList 0.17
239 TestMultiNode/serial/CopyFile 5.16
240 TestMultiNode/serial/StopNode 2.82
241 TestMultiNode/serial/StartAfterStop 41.66
242 TestMultiNode/serial/RestartKeepsNodes 207.74
243 TestMultiNode/serial/DeleteNode 3.34
244 TestMultiNode/serial/StopMultiNode 16.75
245 TestMultiNode/serial/RestartMultiNode 178.64
246 TestMultiNode/serial/ValidateNameConflict 44.11
250 TestPreload 169
253 TestSkaffold 112.63
256 TestRunningBinaryUpgrade 90.39
271 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.49
272 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.86
273 TestStoppedBinaryUpgrade/Setup 1.33
274 TestStoppedBinaryUpgrade/Upgrade 680.57
277 TestStoppedBinaryUpgrade/MinikubeLogs 2.6
286 TestNoKubernetes/serial/StartNoK8sWithVersion 0.66
287 TestNoKubernetes/serial/StartWithK8s 75.11
288 TestNetworkPlugins/group/auto/Start 65.92
289 TestNoKubernetes/serial/StartWithStopK8s 8.6
290 TestNoKubernetes/serial/Start 22.12
291 TestNetworkPlugins/group/auto/KubeletFlags 0.15
292 TestNetworkPlugins/group/auto/NetCatPod 11.15
293 TestNoKubernetes/serial/VerifyK8sNotRunning 0.12
294 TestNoKubernetes/serial/ProfileList 0.47
295 TestNoKubernetes/serial/Stop 2.37
296 TestNoKubernetes/serial/StartNoArgs 19.65
297 TestNetworkPlugins/group/auto/DNS 0.13
298 TestNetworkPlugins/group/auto/Localhost 0.11
299 TestNetworkPlugins/group/auto/HairPin 0.11
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.14
301 TestNetworkPlugins/group/kindnet/Start 71.17
302 TestNetworkPlugins/group/flannel/Start 74.14
303 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
304 TestNetworkPlugins/group/flannel/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.15
306 TestNetworkPlugins/group/kindnet/NetCatPod 12.13
307 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
308 TestNetworkPlugins/group/flannel/NetCatPod 10.14
309 TestNetworkPlugins/group/kindnet/DNS 0.12
310 TestNetworkPlugins/group/kindnet/Localhost 0.11
311 TestNetworkPlugins/group/kindnet/HairPin 0.1
312 TestNetworkPlugins/group/flannel/DNS 0.13
313 TestNetworkPlugins/group/flannel/Localhost 0.1
314 TestNetworkPlugins/group/flannel/HairPin 0.1
316 TestNetworkPlugins/group/bridge/Start 100.54
317 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
318 TestNetworkPlugins/group/bridge/NetCatPod 12.13
319 TestNetworkPlugins/group/bridge/DNS 0.12
320 TestNetworkPlugins/group/bridge/Localhost 0.1
321 TestNetworkPlugins/group/bridge/HairPin 0.1
322 TestNetworkPlugins/group/kubenet/Start 53.52
323 TestNetworkPlugins/group/custom-flannel/Start 65.4
324 TestNetworkPlugins/group/kubenet/KubeletFlags 0.15
325 TestNetworkPlugins/group/kubenet/NetCatPod 12.13
326 TestNetworkPlugins/group/kubenet/DNS 0.13
327 TestNetworkPlugins/group/kubenet/Localhost 0.11
328 TestNetworkPlugins/group/kubenet/HairPin 0.1
329 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.15
330 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.14
331 TestNetworkPlugins/group/calico/Start 81.87
332 TestNetworkPlugins/group/custom-flannel/DNS 0.13
333 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
334 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
335 TestNetworkPlugins/group/false/Start 53.82
336 TestNetworkPlugins/group/false/KubeletFlags 0.17
337 TestNetworkPlugins/group/false/NetCatPod 10.14
338 TestNetworkPlugins/group/calico/ControllerPod 6.01
339 TestNetworkPlugins/group/false/DNS 0.12
340 TestNetworkPlugins/group/false/Localhost 0.11
341 TestNetworkPlugins/group/false/HairPin 0.1
342 TestNetworkPlugins/group/calico/KubeletFlags 0.15
343 TestNetworkPlugins/group/calico/NetCatPod 11.13
344 TestNetworkPlugins/group/calico/DNS 0.12
345 TestNetworkPlugins/group/calico/Localhost 0.1
346 TestNetworkPlugins/group/calico/HairPin 0.1
348 TestStartStop/group/old-k8s-version/serial/FirstStart 147.47
350 TestStartStop/group/no-preload/serial/FirstStart 57.2
351 TestStartStop/group/no-preload/serial/DeployApp 7.2
352 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.8
353 TestStartStop/group/no-preload/serial/Stop 8.43
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
355 TestStartStop/group/no-preload/serial/SecondStart 382.82
356 TestStartStop/group/old-k8s-version/serial/DeployApp 9.35
357 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.71
358 TestStartStop/group/old-k8s-version/serial/Stop 8.4
359 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
360 TestStartStop/group/old-k8s-version/serial/SecondStart 401.23
361 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
362 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
363 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.16
364 TestStartStop/group/no-preload/serial/Pause 1.93
366 TestStartStop/group/embed-certs/serial/FirstStart 90.84
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
368 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
369 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.15
370 TestStartStop/group/old-k8s-version/serial/Pause 1.89
371 TestStartStop/group/embed-certs/serial/DeployApp 9.2
373 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 166.43
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.76
375 TestStartStop/group/embed-certs/serial/Stop 8.41
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
377 TestStartStop/group/embed-certs/serial/SecondStart 313.51
378 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.22
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.74
380 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.43
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
387 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
389 TestStartStop/group/newest-cni/serial/FirstStart 41.6
390 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
391 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
392 TestStartStop/group/embed-certs/serial/Pause 1.94
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
395 TestStartStop/group/newest-cni/serial/Stop 8.46
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
397 TestStartStop/group/newest-cni/serial/SecondStart 51.34
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.16
401 TestStartStop/group/newest-cni/serial/Pause 1.74
TestDownloadOnly/v1.20.0/json-events (22.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-218000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-218000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (22.632105743s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.63s)
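For readers skimming the "(dbg) Run:" / "(dbg) Done:" entries in this report: they record a test shelling out to the built minikube binary and timing the call. The snippet below is a minimal sketch of that pattern using only Go's standard library; runMinikube is a hypothetical helper for illustration, not the helper actually used by aaa_download_only_test.go.

package integration

import (
	"os/exec"
	"testing"
	"time"
)

// runMinikube runs the built minikube binary with the given arguments,
// captures combined output, and fails the test on a non-zero exit,
// mirroring the "(dbg) Run:" / "(dbg) Done:" lines in this report.
func runMinikube(t *testing.T, args ...string) []byte {
	t.Helper()
	start := time.Now()
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if err != nil {
		t.Fatalf("minikube %v failed after %v: %v\n%s", args, time.Since(start), err, out)
	}
	t.Logf("minikube %v completed in %v", args, time.Since(start))
	return out
}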

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-218000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-218000: exit status 85 (297.966561ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-218000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT |          |
	|         | -p download-only-218000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 10:28:02
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 10:28:02.939198    1734 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:28:02.939399    1734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:28:02.939405    1734 out.go:304] Setting ErrFile to fd 2...
	I0725 10:28:02.939409    1734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:28:02.939579    1734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	W0725 10:28:02.939687    1734 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19326-1195/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19326-1195/.minikube/config/config.json: no such file or directory
	I0725 10:28:02.941462    1734 out.go:298] Setting JSON to true
	I0725 10:28:02.963846    1734 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1652,"bootTime":1721926830,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 10:28:02.963923    1734 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:28:02.984610    1734 out.go:97] [download-only-218000] minikube v1.33.1 on Darwin 14.5
	I0725 10:28:02.984854    1734 notify.go:220] Checking for updates...
	W0725 10:28:02.984844    1734 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball: no such file or directory
	I0725 10:28:03.006606    1734 out.go:169] MINIKUBE_LOCATION=19326
	I0725 10:28:03.029548    1734 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:28:03.051410    1734 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 10:28:03.072474    1734 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:28:03.093669    1734 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	W0725 10:28:03.136475    1734 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 10:28:03.136952    1734 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:28:03.187411    1734 out.go:97] Using the hyperkit driver based on user configuration
	I0725 10:28:03.187492    1734 start.go:297] selected driver: hyperkit
	I0725 10:28:03.187507    1734 start.go:901] validating driver "hyperkit" against <nil>
	I0725 10:28:03.187710    1734 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:28:03.188074    1734 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 10:28:03.595867    1734 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 10:28:03.600579    1734 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:28:03.600600    1734 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 10:28:03.600634    1734 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 10:28:03.605056    1734 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0725 10:28:03.605718    1734 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 10:28:03.605745    1734 cni.go:84] Creating CNI manager for ""
	I0725 10:28:03.605760    1734 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0725 10:28:03.605836    1734 start.go:340] cluster config:
	{Name:download-only-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:28:03.606064    1734 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:28:03.627070    1734 out.go:97] Downloading VM boot image ...
	I0725 10:28:03.627186    1734 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 10:28:14.920484    1734 out.go:97] Starting "download-only-218000" primary control-plane node in "download-only-218000" cluster
	I0725 10:28:14.920523    1734 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0725 10:28:14.974800    1734 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0725 10:28:14.974835    1734 cache.go:56] Caching tarball of preloaded images
	I0725 10:28:14.975190    1734 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0725 10:28:14.995617    1734 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0725 10:28:14.995644    1734 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0725 10:28:15.077053    1734 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-218000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-218000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
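The "Non-zero exit ... exit status 85" entry above is followed by a PASS, so the test apparently tolerates (or expects) that exit status rather than requiring success. A minimal sketch of such a check in Go, using only the standard library: the binary path, profile name, and code 85 are taken from this log, while the test name and structure are illustrative and not the real LogsDuration test.

package integration

import (
	"errors"
	"os/exec"
	"testing"
)

// TestLogsExitCode runs "minikube logs" against the download-only profile
// and asserts that the command exits with status 85, as recorded above.
func TestLogsExitCode(t *testing.T) {
	err := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-218000").Run()
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		t.Fatalf("expected a non-zero exit from minikube logs, got err=%v", err)
	}
	if code := exitErr.ExitCode(); code != 85 {
		t.Fatalf("expected exit status 85, got %d", code)
	}
}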

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-218000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (7.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-963000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-963000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit : (7.447794652s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.45s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-963000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-963000: exit status 85 (294.83715ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-218000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT |                     |
	|         | -p download-only-218000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| delete  | -p download-only-218000        | download-only-218000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| start   | -o=json --download-only        | download-only-963000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT |                     |
	|         | -p download-only-963000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 10:28:26
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 10:28:26.311459    1759 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:28:26.311631    1759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:28:26.311636    1759 out.go:304] Setting ErrFile to fd 2...
	I0725 10:28:26.311640    1759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:28:26.311832    1759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 10:28:26.313345    1759 out.go:298] Setting JSON to true
	I0725 10:28:26.335454    1759 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1676,"bootTime":1721926830,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 10:28:26.335577    1759 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:28:26.357821    1759 out.go:97] [download-only-963000] minikube v1.33.1 on Darwin 14.5
	I0725 10:28:26.358019    1759 notify.go:220] Checking for updates...
	I0725 10:28:26.379678    1759 out.go:169] MINIKUBE_LOCATION=19326
	I0725 10:28:26.401292    1759 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:28:26.422726    1759 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 10:28:26.444490    1759 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:28:26.465517    1759 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	W0725 10:28:26.507286    1759 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 10:28:26.507560    1759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:28:26.536335    1759 out.go:97] Using the hyperkit driver based on user configuration
	I0725 10:28:26.536388    1759 start.go:297] selected driver: hyperkit
	I0725 10:28:26.536406    1759 start.go:901] validating driver "hyperkit" against <nil>
	I0725 10:28:26.536639    1759 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:28:26.536878    1759 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 10:28:26.546632    1759 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 10:28:26.550570    1759 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:28:26.550590    1759 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 10:28:26.550615    1759 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 10:28:26.553282    1759 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0725 10:28:26.553429    1759 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 10:28:26.553453    1759 cni.go:84] Creating CNI manager for ""
	I0725 10:28:26.553466    1759 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 10:28:26.553472    1759 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 10:28:26.553548    1759 start.go:340] cluster config:
	{Name:download-only-963000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:28:26.553635    1759 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:28:26.574559    1759 out.go:97] Starting "download-only-963000" primary control-plane node in "download-only-963000" cluster
	I0725 10:28:26.574597    1759 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:28:26.642341    1759 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0725 10:28:26.642370    1759 cache.go:56] Caching tarball of preloaded images
	I0725 10:28:26.642807    1759 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:28:26.664702    1759 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0725 10:28:26.664729    1759 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0725 10:28:26.746322    1759 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-963000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-963000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.30s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-963000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (10.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-927000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-927000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit : (10.482575705s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (10.48s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-927000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-927000: exit status 85 (290.814178ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-218000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT |                     |
	|         | -p download-only-218000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| delete  | -p download-only-218000             | download-only-218000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| start   | -o=json --download-only             | download-only-963000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT |                     |
	|         | -p download-only-963000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| delete  | -p download-only-963000             | download-only-963000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| start   | -o=json --download-only             | download-only-927000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT |                     |
	|         | -p download-only-927000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 10:28:34
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 10:28:34.492356    1784 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:28:34.493024    1784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:28:34.493033    1784 out.go:304] Setting ErrFile to fd 2...
	I0725 10:28:34.493039    1784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:28:34.493611    1784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 10:28:34.495123    1784 out.go:298] Setting JSON to true
	I0725 10:28:34.517274    1784 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1684,"bootTime":1721926830,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 10:28:34.517360    1784 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:28:34.540704    1784 out.go:97] [download-only-927000] minikube v1.33.1 on Darwin 14.5
	I0725 10:28:34.540940    1784 notify.go:220] Checking for updates...
	I0725 10:28:34.562450    1784 out.go:169] MINIKUBE_LOCATION=19326
	I0725 10:28:34.584264    1784 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:28:34.605526    1784 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 10:28:34.627463    1784 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:28:34.650256    1784 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	W0725 10:28:34.692150    1784 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 10:28:34.692570    1784 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:28:34.721924    1784 out.go:97] Using the hyperkit driver based on user configuration
	I0725 10:28:34.722070    1784 start.go:297] selected driver: hyperkit
	I0725 10:28:34.722084    1784 start.go:901] validating driver "hyperkit" against <nil>
	I0725 10:28:34.722285    1784 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:28:34.722514    1784 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19326-1195/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0725 10:28:34.732739    1784 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0725 10:28:34.736905    1784 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:28:34.736926    1784 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0725 10:28:34.736955    1784 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 10:28:34.739774    1784 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0725 10:28:34.739967    1784 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 10:28:34.740017    1784 cni.go:84] Creating CNI manager for ""
	I0725 10:28:34.740032    1784 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 10:28:34.740044    1784 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 10:28:34.740107    1784 start.go:340] cluster config:
	{Name:download-only-927000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-927000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:28:34.740189    1784 iso.go:125] acquiring lock: {Name:mkb263b4f2df5562900a9272313895aaec1f85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:28:34.761426    1784 out.go:97] Starting "download-only-927000" primary control-plane node in "download-only-927000" cluster
	I0725 10:28:34.761464    1784 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 10:28:34.817632    1784 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0725 10:28:34.817691    1784 cache.go:56] Caching tarball of preloaded images
	I0725 10:28:34.818173    1784 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 10:28:34.840207    1784 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0725 10:28:34.840250    1784 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0725 10:28:34.921121    1784 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0725 10:28:42.939802    1784 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0725 10:28:42.939997    1784 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0725 10:28:43.404954    1784 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0725 10:28:43.405212    1784 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/download-only-927000/config.json ...
	I0725 10:28:43.405236    1784 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/download-only-927000/config.json: {Name:mk25bd6ee9ffc318d8e86cc26409af084092a681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:28:43.405549    1784 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 10:28:43.405762    1784 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19326-1195/.minikube/cache/darwin/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-927000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-927000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-927000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
TestBinaryMirror (0.97s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-262000 --alsologtostderr --binary-mirror http://127.0.0.1:49662 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-262000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-262000
--- PASS: TestBinaryMirror (0.97s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-545000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-545000: exit status 85 (188.270066ms)

                                                
                                                
-- stdout --
	* Profile "addons-545000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-545000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-545000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-545000: exit status 85 (209.276882ms)

                                                
                                                
-- stdout --
	* Profile "addons-545000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-545000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                    
TestAddons/Setup (212.69s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-545000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-545000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.689154361s)
--- PASS: TestAddons/Setup (212.69s)

                                                
                                    
TestAddons/serial/Volcano (38.24s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 11.560785ms
addons_test.go:897: volcano-scheduler stabilized in 11.602897ms
addons_test.go:905: volcano-admission stabilized in 11.639034ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-zdrcw" [bd49a4ad-b40b-41cf-bdd5-dbbc7e79b177] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004059089s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-rt5xh" [af41696a-4ae9-48ab-821c-4f9e8d927a3a] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004855385s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-l4dt9" [ab2b49b5-451e-4ec5-8703-1584326da384] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00405492s
addons_test.go:932: (dbg) Run:  kubectl --context addons-545000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-545000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-545000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [17b7ee77-368b-486f-8c85-aa64269e55a2] Pending
helpers_test.go:344: "test-job-nginx-0" [17b7ee77-368b-486f-8c85-aa64269e55a2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [17b7ee77-368b-486f-8c85-aa64269e55a2] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003351163s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-545000 addons disable volcano --alsologtostderr -v=1: (9.963914923s)
--- PASS: TestAddons/serial/Volcano (38.24s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-545000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-545000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
TestAddons/parallel/Registry (13.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.622628ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-xp8dv" [155bbbca-5eb5-4067-9566-ae14e629a679] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004293083s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dw2r9" [e8a47a4a-6d15-4368-9316-b7ca9c3ef6c0] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005511508s
addons_test.go:342: (dbg) Run:  kubectl --context addons-545000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-545000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-545000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.122994243s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 ip
2024/07/25 10:33:29 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.75s)

                                                
                                    
TestAddons/parallel/Ingress (21.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-545000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-545000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-545000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [68f48ee6-f98c-40f0-83cc-7c47fd7cecec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [68f48ee6-f98c-40f0-83cc-7c47fd7cecec] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005473492s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-545000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-amd64 -p addons-545000 addons disable ingress-dns --alsologtostderr -v=1: (1.791655741s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-545000 addons disable ingress --alsologtostderr -v=1: (7.473766809s)
--- PASS: TestAddons/parallel/Ingress (21.17s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.51s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-564kx" [d95ff880-dc07-4d2d-b508-b06a8efb9671] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006077341s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-545000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-545000: (5.505341081s)
--- PASS: TestAddons/parallel/InspektorGadget (10.51s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.277659ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-vhwzr" [f8104d50-5c48-4a0b-a1ad-81ba09e6cf1f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006119803s
addons_test.go:417: (dbg) Run:  kubectl --context addons-545000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.51s)

TestAddons/parallel/HelmTiller (10.36s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.844405ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-5j65j" [acff7063-cbd4-4fa3-b280-08e93d0b419d] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004181592s
addons_test.go:475: (dbg) Run:  kubectl --context addons-545000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-545000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.942608319s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.36s)

TestAddons/parallel/CSI (57.21s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.404503ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-545000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-545000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [01c455d5-dc79-4954-9f70-3a295ad2ff66] Pending
helpers_test.go:344: "task-pv-pod" [01c455d5-dc79-4954-9f70-3a295ad2ff66] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [01c455d5-dc79-4954-9f70-3a295ad2ff66] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.005197018s
addons_test.go:590: (dbg) Run:  kubectl --context addons-545000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-545000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-545000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-545000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-545000 delete pod task-pv-pod: (1.039004332s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-545000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-545000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-545000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3b84d961-4070-4fc0-a133-9b1a9c99ee11] Pending
helpers_test.go:344: "task-pv-pod-restore" [3b84d961-4070-4fc0-a133-9b1a9c99ee11] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3b84d961-4070-4fc0-a133-9b1a9c99ee11] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003924805s
addons_test.go:632: (dbg) Run:  kubectl --context addons-545000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-545000 delete pod task-pv-pod-restore: (1.221049041s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-545000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-545000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-545000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.395621141s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.21s)
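
Condensed, the CSI snapshot/restore flow exercised above is the sequence below (commands copied from the log; between steps the test polls the PVC phase, waits for the pod to run, and checks the snapshot's readyToUse field):
	kubectl --context addons-545000 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-545000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-545000 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-545000 delete pod task-pv-pod
	kubectl --context addons-545000 delete pvc hpvc
	kubectl --context addons-545000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-545000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	kubectl --context addons-545000 delete pod task-pv-pod-restore
	kubectl --context addons-545000 delete pvc hpvc-restore
	kubectl --context addons-545000 delete volumesnapshot new-snapshot-demo
	out/minikube-darwin-amd64 -p addons-545000 addons disable csi-hostpath-driver --alsologtostderr -v=1
	out/minikube-darwin-amd64 -p addons-545000 addons disable volumesnapshots --alsologtostderr -v=1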

TestAddons/parallel/Headlamp (18.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-545000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-v6dn2" [83399c4f-a697-4f0c-82aa-19fc29950f24] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-v6dn2" [83399c4f-a697-4f0c-82aa-19fc29950f24] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004981459s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-545000 addons disable headlamp --alsologtostderr -v=1: (5.440337893s)
--- PASS: TestAddons/parallel/Headlamp (18.39s)

TestAddons/parallel/CloudSpanner (5.4s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-c9cw7" [70b99b2c-7f15-4b0f-851b-3552b95a0bb9] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003879607s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-545000
--- PASS: TestAddons/parallel/CloudSpanner (5.40s)

TestAddons/parallel/LocalPath (52.42s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-545000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-545000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [950dc58b-42b8-411a-834f-a7d8f135c53f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [950dc58b-42b8-411a-834f-a7d8f135c53f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [950dc58b-42b8-411a-834f-a7d8f135c53f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005452288s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-545000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 ssh "cat /opt/local-path-provisioner/pvc-debea71b-845c-443e-bb17-e48dbef8d9ee_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-545000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-545000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-545000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.780921616s)
--- PASS: TestAddons/parallel/LocalPath (52.42s)
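
The local-path check above amounts to applying the test PVC and pod, waiting for the pod to complete, and reading the provisioned file back over ssh (the pvc-... directory name is generated per run; <pvc-id> below is a placeholder, not a literal path):
	kubectl --context addons-545000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-545000 apply -f testdata/storage-provisioner-rancher/pod.yaml
	out/minikube-darwin-amd64 -p addons-545000 ssh "cat /opt/local-path-provisioner/<pvc-id>_default_test-pvc/file1"
	kubectl --context addons-545000 delete pod test-local-path
	kubectl --context addons-545000 delete pvc test-pvc
	out/minikube-darwin-amd64 -p addons-545000 addons disable storage-provisioner-rancher --alsologtostderr -v=1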

TestAddons/parallel/NvidiaDevicePlugin (5.37s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-f6fkq" [fb8a065d-3ca3-4ce7-b373-9cdcc9080273] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005307368s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-545000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.37s)

TestAddons/parallel/Yakd (10.45s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-v5g66" [020146a5-08db-4a75-9d69-e12864d85eb5] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004033493s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-545000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-545000 addons disable yakd --alsologtostderr -v=1: (5.444174128s)
--- PASS: TestAddons/parallel/Yakd (10.45s)

TestAddons/StoppedEnableDisable (5.94s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-545000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-545000: (5.381385091s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-545000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-545000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-545000
--- PASS: TestAddons/StoppedEnableDisable (5.94s)
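
Equivalent manual sequence for the stopped-cluster addon toggles verified above (commands as run by the test):
	out/minikube-darwin-amd64 stop -p addons-545000
	out/minikube-darwin-amd64 addons enable dashboard -p addons-545000
	out/minikube-darwin-amd64 addons disable dashboard -p addons-545000
	out/minikube-darwin-amd64 addons disable gvisor -p addons-545000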

TestHyperKitDriverInstallOrUpdate (8.69s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.69s)

TestErrorSpam/setup (36.05s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-353000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-353000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 --driver=hyperkit : (36.045337088s)
--- PASS: TestErrorSpam/setup (36.05s)

TestErrorSpam/start (1.28s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 start --dry-run
--- PASS: TestErrorSpam/start (1.28s)

TestErrorSpam/status (0.49s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 status
--- PASS: TestErrorSpam/status (0.49s)

TestErrorSpam/pause (1.35s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 pause
--- PASS: TestErrorSpam/pause (1.35s)

TestErrorSpam/unpause (1.36s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

TestErrorSpam/stop (155.79s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 stop: (5.375224765s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 stop: (1m15.204289689s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 stop
E0725 10:37:20.154397    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:20.161896    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:20.172998    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:20.193973    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:20.236232    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:20.316620    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:20.478869    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:20.800796    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:21.441370    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:22.723168    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:25.285520    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:30.406220    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:37:40.647195    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:38:01.127818    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-353000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-353000 stop: (1m15.204591358s)
--- PASS: TestErrorSpam/stop (155.79s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19326-1195/.minikube/files/etc/test/nested/copy/1732/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (54.12s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-402000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0725 10:38:42.089237    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-402000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (54.118541548s)
--- PASS: TestFunctional/serial/StartWithProxy (54.12s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.96s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-402000 --alsologtostderr -v=8
E0725 10:40:04.011005    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-402000 --alsologtostderr -v=8: (36.958936541s)
functional_test.go:659: soft start took 36.959466319s for "functional-402000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.96s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-402000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-402000 cache add registry.k8s.io/pause:3.1: (1.124255216s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local3320683932/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 cache add minikube-local-cache-test:functional-402000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 cache delete minikube-local-cache-test:functional-402000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-402000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-402000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (142.956863ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.04s)
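
The cache reload cycle above can be replayed by hand: remove the cached image inside the VM, confirm crictl no longer finds it, then let `cache reload` push it back (the trailing # comments are annotations taken from the log output, not part of the commands):
	out/minikube-darwin-amd64 -p functional-402000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-402000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit status 1: image not present
	out/minikube-darwin-amd64 -p functional-402000 cache reload
	out/minikube-darwin-amd64 -p functional-402000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after reload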

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 kubectl -- --context functional-402000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-402000 kubectl -- --context functional-402000 get pods: (1.13029795s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.44s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-402000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-402000 get pods: (1.443030805s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.44s)

TestFunctional/serial/ExtraConfig (58.42s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-402000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-402000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.415309059s)
functional_test.go:757: restart took 58.415422909s for "functional-402000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (58.42s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-402000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.63s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-402000 logs: (2.630069914s)
--- PASS: TestFunctional/serial/LogsCmd (2.63s)

TestFunctional/serial/LogsFileCmd (2.85s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1404903851/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-402000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1404903851/001/logs.txt: (2.851753489s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.85s)

TestFunctional/serial/InvalidService (4.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-402000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-402000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-402000: exit status 115 (267.782553ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:30970 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-402000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-402000 delete -f testdata/invalidsvc.yaml: (1.217877537s)
--- PASS: TestFunctional/serial/InvalidService (4.62s)
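
The failure mode covered above: a Service with no running backing pod makes `minikube service` print the NodePort URL but exit with status 115 (SVC_UNREACHABLE). Reproduced with the commands from the log (the # comment is an annotation):
	kubectl --context functional-402000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-amd64 service invalid-svc -p functional-402000   # exit status 115, SVC_UNREACHABLE
	kubectl --context functional-402000 delete -f testdata/invalidsvc.yaml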

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-402000 config get cpus: exit status 14 (75.487553ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-402000 config get cpus: exit status 14 (55.286526ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
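
As exercised above, `config get` on an unset key exits with status 14 ("specified key could not be found in config"); the round trip is (the # comments are annotations):
	out/minikube-darwin-amd64 -p functional-402000 config unset cpus
	out/minikube-darwin-amd64 -p functional-402000 config get cpus    # exit status 14: key not found
	out/minikube-darwin-amd64 -p functional-402000 config set cpus 2
	out/minikube-darwin-amd64 -p functional-402000 config get cpus    # succeeds
	out/minikube-darwin-amd64 -p functional-402000 config unset cpus
	out/minikube-darwin-amd64 -p functional-402000 config get cpus    # exit status 14 again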

TestFunctional/parallel/DashboardCmd (12.27s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-402000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-402000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3213: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.27s)

TestFunctional/parallel/DryRun (1.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-402000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-402000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (620.863348ms)

-- stdout --
	* [functional-402000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0725 10:42:19.261951    3124 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:42:19.262350    3124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:42:19.262360    3124 out.go:304] Setting ErrFile to fd 2...
	I0725 10:42:19.262368    3124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:42:19.262704    3124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 10:42:19.264968    3124 out.go:298] Setting JSON to false
	I0725 10:42:19.288258    3124 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2509,"bootTime":1721926830,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 10:42:19.288378    3124 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:42:19.310507    3124 out.go:177] * [functional-402000] minikube v1.33.1 on Darwin 14.5
	I0725 10:42:19.352574    3124 notify.go:220] Checking for updates...
	I0725 10:42:19.374399    3124 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 10:42:19.416298    3124 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:42:19.458168    3124 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 10:42:19.500350    3124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:42:19.542238    3124 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 10:42:19.584102    3124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 10:42:19.606228    3124 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:42:19.606931    3124 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:42:19.607041    3124 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:42:19.616666    3124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50863
	I0725 10:42:19.617043    3124 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:42:19.617476    3124 main.go:141] libmachine: Using API Version  1
	I0725 10:42:19.617510    3124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:42:19.617730    3124 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:42:19.617909    3124 main.go:141] libmachine: (functional-402000) Calling .DriverName
	I0725 10:42:19.618123    3124 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:42:19.618381    3124 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:42:19.618407    3124 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:42:19.627012    3124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50865
	I0725 10:42:19.627389    3124 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:42:19.627762    3124 main.go:141] libmachine: Using API Version  1
	I0725 10:42:19.627783    3124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:42:19.627989    3124 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:42:19.628089    3124 main.go:141] libmachine: (functional-402000) Calling .DriverName
	I0725 10:42:19.657261    3124 out.go:177] * Using the hyperkit driver based on existing profile
	I0725 10:42:19.699394    3124 start.go:297] selected driver: hyperkit
	I0725 10:42:19.699424    3124 start.go:901] validating driver "hyperkit" against &{Name:functional-402000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:42:19.699614    3124 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 10:42:19.725246    3124 out.go:177] 
	W0725 10:42:19.746255    3124 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0725 10:42:19.767470    3124 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-402000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.30s)
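
The dry-run check above exercises memory validation: requesting 250MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (usable minimum 1800MB) and exit status 23, while the same dry-run without the memory flag passes (commands from the log; the # comment is an annotation):
	out/minikube-darwin-amd64 start -p functional-402000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit    # exit status 23
	out/minikube-darwin-amd64 start -p functional-402000 --dry-run --alsologtostderr -v=1 --driver=hyperkit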

TestFunctional/parallel/InternationalLanguage (1.29s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-402000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-402000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (1.28853915s)

-- stdout --
	* [functional-402000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0725 10:42:20.556149    3170 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:42:20.576806    3170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:42:20.576820    3170 out.go:304] Setting ErrFile to fd 2...
	I0725 10:42:20.576828    3170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:42:20.577143    3170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 10:42:20.640125    3170 out.go:298] Setting JSON to false
	I0725 10:42:20.664179    3170 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2510,"bootTime":1721926830,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0725 10:42:20.664258    3170 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:42:20.786406    3170 out.go:177] * [functional-402000] minikube v1.33.1 sur Darwin 14.5
	I0725 10:42:20.849323    3170 notify.go:220] Checking for updates...
	I0725 10:42:20.891355    3170 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 10:42:21.012495    3170 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	I0725 10:42:21.149248    3170 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 10:42:21.254305    3170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:42:21.338239    3170 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	I0725 10:42:21.401153    3170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 10:42:21.465022    3170 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:42:21.465638    3170 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:42:21.465709    3170 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:42:21.475026    3170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50912
	I0725 10:42:21.475391    3170 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:42:21.475834    3170 main.go:141] libmachine: Using API Version  1
	I0725 10:42:21.475844    3170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:42:21.476078    3170 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:42:21.476206    3170 main.go:141] libmachine: (functional-402000) Calling .DriverName
	I0725 10:42:21.476388    3170 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:42:21.476659    3170 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:42:21.476682    3170 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:42:21.484961    3170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50914
	I0725 10:42:21.485344    3170 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:42:21.485822    3170 main.go:141] libmachine: Using API Version  1
	I0725 10:42:21.485842    3170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:42:21.486041    3170 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:42:21.486163    3170 main.go:141] libmachine: (functional-402000) Calling .DriverName
	I0725 10:42:21.590363    3170 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0725 10:42:21.653230    3170 start.go:297] selected driver: hyperkit
	I0725 10:42:21.653258    3170 start.go:901] validating driver "hyperkit" against &{Name:functional-402000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:42:21.653450    3170 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 10:42:21.696348    3170 out.go:177] 
	W0725 10:42:21.717375    3170 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0725 10:42:21.738544    3170 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.29s)
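The French output above is the expected result of this test: the same dry-run request that fails in English under DryRun fails here with the localized RSRC_INSUFFICIENT_REQ_MEMORY message. Below is a minimal Go sketch (not the project's test code) of how that could be reproduced outside the harness. The binary path, profile name, and flags are copied from the log; the LC_ALL=fr locale switch is an assumption about how the French output is presumably forced.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same dry-run invocation as in the log above.
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-402000",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=hyperkit")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumed locale switch, not shown in the log
	out, _ := cmd.CombinedOutput()              // a non-zero exit (status 23 above) is the expected outcome
	if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("dry run rejected 250MB as expected, message localized")
	}
}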

TestFunctional/parallel/StatusCmd (0.53s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.53s)

TestFunctional/parallel/ServiceCmdConnect (8.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-402000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-402000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-h2tjl" [77e736b8-7907-4ac3-a519-7efd5c6b02a5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-h2tjl" [77e736b8-7907-4ac3-a519-7efd5c6b02a5] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005524836s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.4:31000
functional_test.go:1671: http://192.169.0.4:31000: success! body:

Hostname: hello-node-connect-57b4589c47-h2tjl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31000
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.63s)
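The body printed above is the standard echoserver response; the check logged at functional_test.go:1671 amounts to fetching the NodePort URL and confirming the pod answers. A minimal Go sketch of that request, using the URL reported in this run (it is only reachable while that cluster exists):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Endpoint taken from "found endpoint for hello-node-connect" in the log above.
	resp, err := http.Get("http://192.169.0.4:31000/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("echoserver answered with its pod hostname")
	}
}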

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (27.47s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8d5e79b7-d157-4310-9535-00370299953b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006154753s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-402000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-402000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-402000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-402000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f5cb454d-a535-47c4-95ab-4952848fe41a] Pending
helpers_test.go:344: "sp-pod" [f5cb454d-a535-47c4-95ab-4952848fe41a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f5cb454d-a535-47c4-95ab-4952848fe41a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004175838s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-402000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-402000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-402000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [346ca3cd-9adf-4c8f-a180-3199d0e5f257] Pending
helpers_test.go:344: "sp-pod" [346ca3cd-9adf-4c8f-a180-3199d0e5f257] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [346ca3cd-9adf-4c8f-a180-3199d0e5f257] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003302823s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-402000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.47s)
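This test exercises a full claim round-trip: create a PVC, mount it in sp-pod, write /tmp/mount/foo, delete and recreate the pod, and confirm the file survives. The sketch below drives the first step with kubectl from Go; the inline manifest is an assumption for illustration only, since the actual testdata/storage-provisioner/pvc.yaml is not shown in this log.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// Assumed minimal claim for illustration; the real testdata manifest may differ.
const pvcManifest = `apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
`

// kubectl runs a kubectl command against the functional-402000 context,
// optionally feeding a manifest on stdin.
func kubectl(stdin string, args ...string) (string, error) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-402000"}, args...)...)
	if stdin != "" {
		cmd.Stdin = bytes.NewBufferString(stdin)
	}
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	if out, err := kubectl(pvcManifest, "apply", "-f", "-"); err != nil {
		fmt.Println("apply failed:", err, out)
		return
	}
	out, _ := kubectl("", "get", "pvc", "myclaim", "-o", "json")
	fmt.Println(out) // the test then binds this claim into sp-pod and writes to it
}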

TestFunctional/parallel/SSHCmd (0.33s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.33s)

TestFunctional/parallel/CpCmd (1.03s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh -n functional-402000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 cp functional-402000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd3824525300/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh -n functional-402000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh -n functional-402000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.03s)

TestFunctional/parallel/MySQL (26.92s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-402000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-gjdcn" [75f0c542-2d64-42d3-9268-50971b977e98] Pending
helpers_test.go:344: "mysql-64454c8b5c-gjdcn" [75f0c542-2d64-42d3-9268-50971b977e98] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-gjdcn" [75f0c542-2d64-42d3-9268-50971b977e98] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.002995259s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-402000 exec mysql-64454c8b5c-gjdcn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-402000 exec mysql-64454c8b5c-gjdcn -- mysql -ppassword -e "show databases;": exit status 1 (162.755804ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-402000 exec mysql-64454c8b5c-gjdcn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-402000 exec mysql-64454c8b5c-gjdcn -- mysql -ppassword -e "show databases;": exit status 1 (104.598754ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-402000 exec mysql-64454c8b5c-gjdcn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-402000 exec mysql-64454c8b5c-gjdcn -- mysql -ppassword -e "show databases;": exit status 1 (102.003362ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-402000 exec mysql-64454c8b5c-gjdcn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.92s)
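The non-zero exits above are expected transients: ERROR 2002 means mysqld is not yet listening on its socket, and ERROR 1045 typically appears while the mysql:5.7 entrypoint is still initializing and the configured root password is not yet in effect. The test simply retries the query until it succeeds. A minimal sketch of such a retry loop, with the pod name and password taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const pod = "mysql-64454c8b5c-gjdcn" // pod name from the log above
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-402000", "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil && strings.Contains(string(out), "information_schema") {
			fmt.Printf("mysql ready after %d attempt(s)\n", attempt)
			return
		}
		time.Sleep(5 * time.Second) // ride out the transient 1045/2002 errors during startup
	}
	fmt.Println("mysql never became ready")
}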

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1732/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "sudo cat /etc/test/nested/copy/1732/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.03s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1732.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "sudo cat /etc/ssl/certs/1732.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1732.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "sudo cat /usr/share/ca-certificates/1732.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/17322.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "sudo cat /etc/ssl/certs/17322.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/17322.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "sudo cat /usr/share/ca-certificates/17322.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.03s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-402000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-402000 ssh "sudo systemctl is-active crio": exit status 1 (156.709388ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)
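The exit status 1 above is the passing outcome: this cluster runs the docker runtime, so "systemctl is-active crio" prints "inactive" and exits non-zero (systemd reports exit code 3 for inactive units, surfaced here as "Process exited with status 3"). A minimal Go sketch of that assertion, reusing the command from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-402000",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	// On a docker-runtime node the command must fail and report "inactive".
	if errors.As(err, &exitErr) && strings.Contains(string(out), "inactive") {
		fmt.Println("crio is not active, as expected with the docker runtime")
		return
	}
	fmt.Println("unexpected: crio appears to be active")
}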

TestFunctional/parallel/License (0.45s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.45s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-402000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-402000
docker.io/kicbase/echo-server:functional-402000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-402000 image ls --format short --alsologtostderr:
I0725 10:42:23.487305    3220 out.go:291] Setting OutFile to fd 1 ...
I0725 10:42:23.487598    3220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:42:23.487604    3220 out.go:304] Setting ErrFile to fd 2...
I0725 10:42:23.487607    3220 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:42:23.487780    3220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
I0725 10:42:23.488354    3220 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:42:23.488452    3220 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:42:23.488798    3220 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 10:42:23.488844    3220 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0725 10:42:23.497109    3220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50979
I0725 10:42:23.497556    3220 main.go:141] libmachine: () Calling .GetVersion
I0725 10:42:23.497982    3220 main.go:141] libmachine: Using API Version  1
I0725 10:42:23.497992    3220 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 10:42:23.498239    3220 main.go:141] libmachine: () Calling .GetMachineName
I0725 10:42:23.498353    3220 main.go:141] libmachine: (functional-402000) Calling .GetState
I0725 10:42:23.498518    3220 main.go:141] libmachine: (functional-402000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 10:42:23.498587    3220 main.go:141] libmachine: (functional-402000) DBG | hyperkit pid from json: 2518
I0725 10:42:23.499845    3220 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 10:42:23.499871    3220 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0725 10:42:23.508097    3220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50981
I0725 10:42:23.508445    3220 main.go:141] libmachine: () Calling .GetVersion
I0725 10:42:23.508791    3220 main.go:141] libmachine: Using API Version  1
I0725 10:42:23.508803    3220 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 10:42:23.509053    3220 main.go:141] libmachine: () Calling .GetMachineName
I0725 10:42:23.509173    3220 main.go:141] libmachine: (functional-402000) Calling .DriverName
I0725 10:42:23.509335    3220 ssh_runner.go:195] Run: systemctl --version
I0725 10:42:23.509354    3220 main.go:141] libmachine: (functional-402000) Calling .GetSSHHostname
I0725 10:42:23.509437    3220 main.go:141] libmachine: (functional-402000) Calling .GetSSHPort
I0725 10:42:23.509511    3220 main.go:141] libmachine: (functional-402000) Calling .GetSSHKeyPath
I0725 10:42:23.509585    3220 main.go:141] libmachine: (functional-402000) Calling .GetSSHUsername
I0725 10:42:23.509670    3220 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/functional-402000/id_rsa Username:docker}
I0725 10:42:23.544621    3220 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0725 10:42:23.559813    3220 main.go:141] libmachine: Making call to close driver server
I0725 10:42:23.559821    3220 main.go:141] libmachine: (functional-402000) Calling .Close
I0725 10:42:23.559985    3220 main.go:141] libmachine: Successfully made call to close driver server
I0725 10:42:23.559994    3220 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 10:42:23.560002    3220 main.go:141] libmachine: Making call to close driver server
I0725 10:42:23.560007    3220 main.go:141] libmachine: (functional-402000) Calling .Close
I0725 10:42:23.560140    3220 main.go:141] libmachine: Successfully made call to close driver server
I0725 10:42:23.560148    3220 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 10:42:23.560161    3220 main.go:141] libmachine: (functional-402000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-402000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kicbase/echo-server               | functional-402000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/library/nginx                     | alpine            | 1ae23480369fa | 43.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-402000 | 1ca2cf12ccf87 | 30B    |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | a72860cb95fd5 | 188MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-402000 image ls --format table --alsologtostderr:
I0725 10:42:23.791989    3228 out.go:291] Setting OutFile to fd 1 ...
I0725 10:42:23.792182    3228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:42:23.792187    3228 out.go:304] Setting ErrFile to fd 2...
I0725 10:42:23.792191    3228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:42:23.792365    3228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
I0725 10:42:23.792939    3228 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:42:23.793031    3228 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:42:23.793376    3228 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 10:42:23.793433    3228 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0725 10:42:23.801753    3228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50990
I0725 10:42:23.802230    3228 main.go:141] libmachine: () Calling .GetVersion
I0725 10:42:23.802642    3228 main.go:141] libmachine: Using API Version  1
I0725 10:42:23.802651    3228 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 10:42:23.802884    3228 main.go:141] libmachine: () Calling .GetMachineName
I0725 10:42:23.803003    3228 main.go:141] libmachine: (functional-402000) Calling .GetState
I0725 10:42:23.803108    3228 main.go:141] libmachine: (functional-402000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 10:42:23.803155    3228 main.go:141] libmachine: (functional-402000) DBG | hyperkit pid from json: 2518
I0725 10:42:23.804431    3228 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 10:42:23.804457    3228 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0725 10:42:23.812948    3228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50992
I0725 10:42:23.813309    3228 main.go:141] libmachine: () Calling .GetVersion
I0725 10:42:23.813613    3228 main.go:141] libmachine: Using API Version  1
I0725 10:42:23.813625    3228 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 10:42:23.813840    3228 main.go:141] libmachine: () Calling .GetMachineName
I0725 10:42:23.813971    3228 main.go:141] libmachine: (functional-402000) Calling .DriverName
I0725 10:42:23.814117    3228 ssh_runner.go:195] Run: systemctl --version
I0725 10:42:23.814137    3228 main.go:141] libmachine: (functional-402000) Calling .GetSSHHostname
I0725 10:42:23.814227    3228 main.go:141] libmachine: (functional-402000) Calling .GetSSHPort
I0725 10:42:23.814299    3228 main.go:141] libmachine: (functional-402000) Calling .GetSSHKeyPath
I0725 10:42:23.814403    3228 main.go:141] libmachine: (functional-402000) Calling .GetSSHUsername
I0725 10:42:23.814512    3228 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/functional-402000/id_rsa Username:docker}
I0725 10:42:23.847062    3228 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0725 10:42:23.863948    3228 main.go:141] libmachine: Making call to close driver server
I0725 10:42:23.863957    3228 main.go:141] libmachine: (functional-402000) Calling .Close
I0725 10:42:23.864104    3228 main.go:141] libmachine: Successfully made call to close driver server
I0725 10:42:23.864105    3228 main.go:141] libmachine: (functional-402000) DBG | Closing plugin on server side
I0725 10:42:23.864116    3228 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 10:42:23.864126    3228 main.go:141] libmachine: Making call to close driver server
I0725 10:42:23.864131    3228 main.go:141] libmachine: (functional-402000) Calling .Close
I0725 10:42:23.864241    3228 main.go:141] libmachine: Successfully made call to close driver server
I0725 10:42:23.864251    3228 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 10:42:23.864274    3228 main.go:141] libmachine: (functional-402000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-402000 image ls --format json --alsologtostderr:
[{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a",
"repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"1ca2cf12ccf87dab2714deac3942b10efa5c0b9432b699e7b6c24a7208b2bf7a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-402000"],"size":"30"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5
.12-0"],"size":"149000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-402000"],"size":"4940000"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-402000 image ls --format json --alsologtostderr:
I0725 10:42:23.639394    3224 out.go:291] Setting OutFile to fd 1 ...
I0725 10:42:23.639569    3224 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:42:23.639574    3224 out.go:304] Setting ErrFile to fd 2...
I0725 10:42:23.639578    3224 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:42:23.639762    3224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
I0725 10:42:23.640341    3224 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:42:23.640431    3224 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:42:23.640765    3224 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 10:42:23.640815    3224 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0725 10:42:23.649342    3224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50985
I0725 10:42:23.649751    3224 main.go:141] libmachine: () Calling .GetVersion
I0725 10:42:23.650162    3224 main.go:141] libmachine: Using API Version  1
I0725 10:42:23.650183    3224 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 10:42:23.650431    3224 main.go:141] libmachine: () Calling .GetMachineName
I0725 10:42:23.650546    3224 main.go:141] libmachine: (functional-402000) Calling .GetState
I0725 10:42:23.650648    3224 main.go:141] libmachine: (functional-402000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 10:42:23.650721    3224 main.go:141] libmachine: (functional-402000) DBG | hyperkit pid from json: 2518
I0725 10:42:23.652004    3224 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 10:42:23.652026    3224 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0725 10:42:23.660293    3224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50987
I0725 10:42:23.660632    3224 main.go:141] libmachine: () Calling .GetVersion
I0725 10:42:23.660954    3224 main.go:141] libmachine: Using API Version  1
I0725 10:42:23.660965    3224 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 10:42:23.661180    3224 main.go:141] libmachine: () Calling .GetMachineName
I0725 10:42:23.661282    3224 main.go:141] libmachine: (functional-402000) Calling .DriverName
I0725 10:42:23.661441    3224 ssh_runner.go:195] Run: systemctl --version
I0725 10:42:23.661461    3224 main.go:141] libmachine: (functional-402000) Calling .GetSSHHostname
I0725 10:42:23.661554    3224 main.go:141] libmachine: (functional-402000) Calling .GetSSHPort
I0725 10:42:23.661642    3224 main.go:141] libmachine: (functional-402000) Calling .GetSSHKeyPath
I0725 10:42:23.661727    3224 main.go:141] libmachine: (functional-402000) Calling .GetSSHUsername
I0725 10:42:23.661841    3224 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/functional-402000/id_rsa Username:docker}
I0725 10:42:23.696279    3224 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0725 10:42:23.712707    3224 main.go:141] libmachine: Making call to close driver server
I0725 10:42:23.712715    3224 main.go:141] libmachine: (functional-402000) Calling .Close
I0725 10:42:23.712856    3224 main.go:141] libmachine: (functional-402000) DBG | Closing plugin on server side
I0725 10:42:23.712878    3224 main.go:141] libmachine: Successfully made call to close driver server
I0725 10:42:23.712887    3224 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 10:42:23.712895    3224 main.go:141] libmachine: Making call to close driver server
I0725 10:42:23.712900    3224 main.go:141] libmachine: (functional-402000) Calling .Close
I0725 10:42:23.713015    3224 main.go:141] libmachine: (functional-402000) DBG | Closing plugin on server side
I0725 10:42:23.713025    3224 main.go:141] libmachine: Successfully made call to close driver server
I0725 10:42:23.713040    3224 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)
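The JSON variant is the easiest of the four list formats to consume programmatically. A minimal Go sketch that parses output of the shape shown above; the struct fields mirror only the keys visible in this run's JSON (id, repoTags, size), and nothing further is assumed about the schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the keys visible in the JSON printed by "image ls --format json" above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-402000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected output shape:", err)
		return
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}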

TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-402000 image ls --format yaml --alsologtostderr:
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-402000
size: "4940000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 1ca2cf12ccf87dab2714deac3942b10efa5c0b9432b699e7b6c24a7208b2bf7a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-402000
size: "30"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-402000 image ls --format yaml --alsologtostderr:
I0725 10:42:23.941937    3232 out.go:291] Setting OutFile to fd 1 ...
I0725 10:42:23.942758    3232 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:42:23.942767    3232 out.go:304] Setting ErrFile to fd 2...
I0725 10:42:23.942774    3232 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:42:23.943325    3232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
I0725 10:42:23.943915    3232 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:42:23.944005    3232 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:42:23.944333    3232 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 10:42:23.944378    3232 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0725 10:42:23.952891    3232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50996
I0725 10:42:23.953313    3232 main.go:141] libmachine: () Calling .GetVersion
I0725 10:42:23.953717    3232 main.go:141] libmachine: Using API Version  1
I0725 10:42:23.953728    3232 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 10:42:23.953932    3232 main.go:141] libmachine: () Calling .GetMachineName
I0725 10:42:23.954033    3232 main.go:141] libmachine: (functional-402000) Calling .GetState
I0725 10:42:23.954115    3232 main.go:141] libmachine: (functional-402000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 10:42:23.954204    3232 main.go:141] libmachine: (functional-402000) DBG | hyperkit pid from json: 2518
I0725 10:42:23.955454    3232 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 10:42:23.955480    3232 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0725 10:42:23.963869    3232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50998
I0725 10:42:23.964214    3232 main.go:141] libmachine: () Calling .GetVersion
I0725 10:42:23.964650    3232 main.go:141] libmachine: Using API Version  1
I0725 10:42:23.964659    3232 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 10:42:23.964899    3232 main.go:141] libmachine: () Calling .GetMachineName
I0725 10:42:23.965033    3232 main.go:141] libmachine: (functional-402000) Calling .DriverName
I0725 10:42:23.965197    3232 ssh_runner.go:195] Run: systemctl --version
I0725 10:42:23.965217    3232 main.go:141] libmachine: (functional-402000) Calling .GetSSHHostname
I0725 10:42:23.965306    3232 main.go:141] libmachine: (functional-402000) Calling .GetSSHPort
I0725 10:42:23.965384    3232 main.go:141] libmachine: (functional-402000) Calling .GetSSHKeyPath
I0725 10:42:23.965493    3232 main.go:141] libmachine: (functional-402000) Calling .GetSSHUsername
I0725 10:42:23.965586    3232 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/functional-402000/id_rsa Username:docker}
I0725 10:42:23.999589    3232 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0725 10:42:24.016771    3232 main.go:141] libmachine: Making call to close driver server
I0725 10:42:24.016781    3232 main.go:141] libmachine: (functional-402000) Calling .Close
I0725 10:42:24.016951    3232 main.go:141] libmachine: Successfully made call to close driver server
I0725 10:42:24.016966    3232 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 10:42:24.016964    3232 main.go:141] libmachine: (functional-402000) DBG | Closing plugin on server side
I0725 10:42:24.016980    3232 main.go:141] libmachine: Making call to close driver server
I0725 10:42:24.016987    3232 main.go:141] libmachine: (functional-402000) Calling .Close
I0725 10:42:24.017146    3232 main.go:141] libmachine: (functional-402000) DBG | Closing plugin on server side
I0725 10:42:24.017207    3232 main.go:141] libmachine: Successfully made call to close driver server
I0725 10:42:24.017227    3232 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)
TestFunctional/parallel/ImageCommands/ImageBuild (2.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-402000 ssh pgrep buildkitd: exit status 1 (125.564405ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image build -t localhost/my-image:functional-402000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-402000 image build -t localhost/my-image:functional-402000 testdata/build --alsologtostderr: (2.046648217s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-402000 image build -t localhost/my-image:functional-402000 testdata/build --alsologtostderr:
I0725 10:42:24.222825    3241 out.go:291] Setting OutFile to fd 1 ...
I0725 10:42:24.223185    3241 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:42:24.223191    3241 out.go:304] Setting ErrFile to fd 2...
I0725 10:42:24.223195    3241 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:42:24.223362    3241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
I0725 10:42:24.223980    3241 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:42:24.224606    3241 config.go:182] Loaded profile config "functional-402000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:42:24.224964    3241 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 10:42:24.225007    3241 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0725 10:42:24.233773    3241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51009
I0725 10:42:24.234192    3241 main.go:141] libmachine: () Calling .GetVersion
I0725 10:42:24.234611    3241 main.go:141] libmachine: Using API Version  1
I0725 10:42:24.234622    3241 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 10:42:24.234824    3241 main.go:141] libmachine: () Calling .GetMachineName
I0725 10:42:24.234934    3241 main.go:141] libmachine: (functional-402000) Calling .GetState
I0725 10:42:24.235021    3241 main.go:141] libmachine: (functional-402000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 10:42:24.235103    3241 main.go:141] libmachine: (functional-402000) DBG | hyperkit pid from json: 2518
I0725 10:42:24.236393    3241 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 10:42:24.236420    3241 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0725 10:42:24.244944    3241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51011
I0725 10:42:24.245302    3241 main.go:141] libmachine: () Calling .GetVersion
I0725 10:42:24.245649    3241 main.go:141] libmachine: Using API Version  1
I0725 10:42:24.245662    3241 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 10:42:24.245927    3241 main.go:141] libmachine: () Calling .GetMachineName
I0725 10:42:24.246055    3241 main.go:141] libmachine: (functional-402000) Calling .DriverName
I0725 10:42:24.246222    3241 ssh_runner.go:195] Run: systemctl --version
I0725 10:42:24.246242    3241 main.go:141] libmachine: (functional-402000) Calling .GetSSHHostname
I0725 10:42:24.246329    3241 main.go:141] libmachine: (functional-402000) Calling .GetSSHPort
I0725 10:42:24.246399    3241 main.go:141] libmachine: (functional-402000) Calling .GetSSHKeyPath
I0725 10:42:24.246484    3241 main.go:141] libmachine: (functional-402000) Calling .GetSSHUsername
I0725 10:42:24.246578    3241 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/functional-402000/id_rsa Username:docker}
I0725 10:42:24.281710    3241 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3484080060.tar
I0725 10:42:24.281785    3241 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0725 10:42:24.291358    3241 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3484080060.tar
I0725 10:42:24.295968    3241 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3484080060.tar: stat -c "%s %y" /var/lib/minikube/build/build.3484080060.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3484080060.tar': No such file or directory
I0725 10:42:24.296002    3241 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3484080060.tar --> /var/lib/minikube/build/build.3484080060.tar (3072 bytes)
I0725 10:42:24.329400    3241 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3484080060
I0725 10:42:24.347192    3241 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3484080060 -xf /var/lib/minikube/build/build.3484080060.tar
I0725 10:42:24.361257    3241 docker.go:360] Building image: /var/lib/minikube/build/build.3484080060
I0725 10:42:24.361334    3241 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-402000 /var/lib/minikube/build/build.3484080060
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e9e7d983ffbc4429e776883ba89ac3fe15d393b21b3f461a661a05d8580cc423 done
#8 naming to localhost/my-image:functional-402000 done
#8 DONE 0.0s
I0725 10:42:26.166709    3241 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-402000 /var/lib/minikube/build/build.3484080060: (1.805322512s)
I0725 10:42:26.166768    3241 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3484080060
I0725 10:42:26.175954    3241 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3484080060.tar
I0725 10:42:26.185875    3241 build_images.go:217] Built localhost/my-image:functional-402000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3484080060.tar
I0725 10:42:26.185900    3241 build_images.go:133] succeeded building to: functional-402000
I0725 10:42:26.185904    3241 build_images.go:134] failed building to: 
I0725 10:42:26.185920    3241 main.go:141] libmachine: Making call to close driver server
I0725 10:42:26.185927    3241 main.go:141] libmachine: (functional-402000) Calling .Close
I0725 10:42:26.186093    3241 main.go:141] libmachine: Successfully made call to close driver server
I0725 10:42:26.186103    3241 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 10:42:26.186109    3241 main.go:141] libmachine: Making call to close driver server
I0725 10:42:26.186116    3241 main.go:141] libmachine: (functional-402000) Calling .Close
I0725 10:42:26.186116    3241 main.go:141] libmachine: (functional-402000) DBG | Closing plugin on server side
I0725 10:42:26.186240    3241 main.go:141] libmachine: Successfully made call to close driver server
I0725 10:42:26.186248    3241 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image ls
2024/07/25 10:42:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.33s)
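For reference, the BuildKit steps in the log above (#1 load Dockerfile, #5 FROM gcr.io/k8s-minikube/busybox, #6 RUN true, #7 ADD content.txt /) imply that the testdata/build context used by this test contains a Dockerfile along the following lines. This is a sketch reconstructed from the logged build steps, not the verbatim file contents:
	# Reconstructed from the build steps logged above; the actual
	# testdata/build/Dockerfile may differ in comments or formatting.
	FROM gcr.io/k8s-minikube/busybox
	# resolved to gcr.io/k8s-minikube/busybox:latest in steps #2/#5 above
	RUN true
	ADD content.txt /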
TestFunctional/parallel/ImageCommands/Setup (1.7s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.667428803s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-402000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.70s)
TestFunctional/parallel/DockerEnv/bash (0.61s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-402000 docker-env) && out/minikube-darwin-amd64 status -p functional-402000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-402000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.61s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image load --daemon docker.io/kicbase/echo-server:functional-402000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.87s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.65s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image load --daemon docker.io/kicbase/echo-server:functional-402000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.65s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-402000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image load --daemon docker.io/kicbase/echo-server:functional-402000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image save docker.io/kicbase/echo-server:functional-402000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.25s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image rm docker.io/kicbase/echo-server:functional-402000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.31s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-402000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 image save --daemon docker.io/kicbase/echo-server:functional-402000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-402000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
TestFunctional/parallel/ServiceCmd/DeployApp (22.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-402000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-402000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-ss4qf" [c69b5a2f-7690-41b0-b9d9-face641a3de2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-ss4qf" [c69b5a2f-7690-41b0-b9d9-face641a3de2] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.005540415s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.13s)
TestFunctional/parallel/ServiceCmd/List (0.18s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.18s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.18s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 service list -o json
functional_test.go:1490: Took "180.420646ms" to run "out/minikube-darwin-amd64 -p functional-402000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.18s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.4:30685
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)
TestFunctional/parallel/ServiceCmd/Format (0.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)
TestFunctional/parallel/ServiceCmd/URL (0.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.4:30685
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.36s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-402000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-402000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-402000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2960: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-402000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.36s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-402000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-402000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9a1c3451-b015-49d1-8f43-bdb7e30fd23d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9a1c3451-b015-49d1-8f43-bdb7e30fd23d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003094992s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.14s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-402000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.63.38 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-402000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)
TestFunctional/parallel/ProfileCmd/profile_list (0.27s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "194.141891ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "80.102773ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "182.224515ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "77.26358ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
TestFunctional/parallel/MountCmd/any-port (5.92s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port81729919/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721929332686181000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port81729919/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721929332686181000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port81729919/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721929332686181000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port81729919/001/test-1721929332686181000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-402000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (153.596584ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 25 17:42 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 25 17:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 25 17:42 test-1721929332686181000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh cat /mount-9p/test-1721929332686181000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-402000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [da825798-3737-40bb-b4cc-446532d687a0] Pending
helpers_test.go:344: "busybox-mount" [da825798-3737-40bb-b4cc-446532d687a0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [da825798-3737-40bb-b4cc-446532d687a0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [da825798-3737-40bb-b4cc-446532d687a0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003619176s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-402000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port81729919/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.92s)
TestFunctional/parallel/MountCmd/specific-port (1.7s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1690554655/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-402000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.324098ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1690554655/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "sudo umount -f /mount-9p"
E0725 10:42:20.159690    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-402000 ssh "sudo umount -f /mount-9p": exit status 1 (179.569936ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-402000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1690554655/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)
TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1699522809/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1699522809/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1699522809/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-402000 ssh "findmnt -T" /mount1: exit status 1 (180.255326ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-402000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-402000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1699522809/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1699522809/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-402000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1699522809/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-402000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-402000
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-402000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestMultiControlPlane/serial/StartCluster (205.72s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-485000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0725 10:42:47.854867    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-485000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m25.3331402s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (205.72s)
TestMultiControlPlane/serial/DeployApp (5.21s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-485000 -- rollout status deployment/busybox: (2.973987263s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-4r7sr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-fmpmr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-zq4hj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-4r7sr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-fmpmr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-zq4hj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-4r7sr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-fmpmr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-zq4hj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.21s)
TestMultiControlPlane/serial/PingHostFromPods (1.27s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-4r7sr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-4r7sr -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-fmpmr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-fmpmr -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-zq4hj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-485000 -- exec busybox-fc5497c4f-zq4hj -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
TestMultiControlPlane/serial/AddWorkerNode (49.61s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-485000 -v=7 --alsologtostderr
E0725 10:46:24.581556    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:24.586766    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:24.598305    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:24.620183    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:24.661481    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:24.742923    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:24.904450    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:25.225192    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:25.867155    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:27.147921    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:29.709623    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:34.830944    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 10:46:45.071613    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-485000 -v=7 --alsologtostderr: (49.171412827s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.61s)
TestMultiControlPlane/serial/NodeLabels (0.05s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-485000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.33s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.33s)
TestMultiControlPlane/serial/CopyFile (9.06s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp testdata/cp-test.txt ha-485000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1573436771/001/cp-test_ha-485000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000:/home/docker/cp-test.txt ha-485000-m02:/home/docker/cp-test_ha-485000_ha-485000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m02 "sudo cat /home/docker/cp-test_ha-485000_ha-485000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000:/home/docker/cp-test.txt ha-485000-m03:/home/docker/cp-test_ha-485000_ha-485000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m03 "sudo cat /home/docker/cp-test_ha-485000_ha-485000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000:/home/docker/cp-test.txt ha-485000-m04:/home/docker/cp-test_ha-485000_ha-485000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m04 "sudo cat /home/docker/cp-test_ha-485000_ha-485000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp testdata/cp-test.txt ha-485000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1573436771/001/cp-test_ha-485000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m02:/home/docker/cp-test.txt ha-485000:/home/docker/cp-test_ha-485000-m02_ha-485000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000 "sudo cat /home/docker/cp-test_ha-485000-m02_ha-485000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m02:/home/docker/cp-test.txt ha-485000-m03:/home/docker/cp-test_ha-485000-m02_ha-485000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m02 "sudo cat /home/docker/cp-test.txt"
E0725 10:47:05.552055    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m03 "sudo cat /home/docker/cp-test_ha-485000-m02_ha-485000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m02:/home/docker/cp-test.txt ha-485000-m04:/home/docker/cp-test_ha-485000-m02_ha-485000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m04 "sudo cat /home/docker/cp-test_ha-485000-m02_ha-485000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp testdata/cp-test.txt ha-485000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1573436771/001/cp-test_ha-485000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m03:/home/docker/cp-test.txt ha-485000:/home/docker/cp-test_ha-485000-m03_ha-485000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000 "sudo cat /home/docker/cp-test_ha-485000-m03_ha-485000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m03:/home/docker/cp-test.txt ha-485000-m02:/home/docker/cp-test_ha-485000-m03_ha-485000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m02 "sudo cat /home/docker/cp-test_ha-485000-m03_ha-485000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m03:/home/docker/cp-test.txt ha-485000-m04:/home/docker/cp-test_ha-485000-m03_ha-485000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m04 "sudo cat /home/docker/cp-test_ha-485000-m03_ha-485000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp testdata/cp-test.txt ha-485000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1573436771/001/cp-test_ha-485000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt ha-485000:/home/docker/cp-test_ha-485000-m04_ha-485000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000 "sudo cat /home/docker/cp-test_ha-485000-m04_ha-485000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt ha-485000-m02:/home/docker/cp-test_ha-485000-m04_ha-485000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m02 "sudo cat /home/docker/cp-test_ha-485000-m04_ha-485000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 cp ha-485000-m04:/home/docker/cp-test.txt ha-485000-m03:/home/docker/cp-test_ha-485000-m04_ha-485000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 ssh -n ha-485000-m03 "sudo cat /home/docker/cp-test_ha-485000-m04_ha-485000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.06s)

TestMultiControlPlane/serial/StopSecondaryNode (8.69s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-485000 node stop m02 -v=7 --alsologtostderr: (8.339477209s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr: exit status 7 (347.596588ms)

                                                
                                                
-- stdout --
	ha-485000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-485000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-485000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-485000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 10:47:19.034493    3707 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:47:19.034769    3707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:47:19.034775    3707 out.go:304] Setting ErrFile to fd 2...
	I0725 10:47:19.034779    3707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:47:19.034960    3707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 10:47:19.035159    3707 out.go:298] Setting JSON to false
	I0725 10:47:19.035181    3707 mustload.go:65] Loading cluster: ha-485000
	I0725 10:47:19.035226    3707 notify.go:220] Checking for updates...
	I0725 10:47:19.035499    3707 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:47:19.035514    3707 status.go:255] checking status of ha-485000 ...
	I0725 10:47:19.035908    3707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:47:19.035964    3707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:47:19.044749    3707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51749
	I0725 10:47:19.045107    3707 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:47:19.045573    3707 main.go:141] libmachine: Using API Version  1
	I0725 10:47:19.045589    3707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:47:19.045794    3707 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:47:19.045916    3707 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:47:19.046009    3707 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:47:19.046097    3707 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3271
	I0725 10:47:19.047092    3707 status.go:330] ha-485000 host status = "Running" (err=<nil>)
	I0725 10:47:19.047110    3707 host.go:66] Checking if "ha-485000" exists ...
	I0725 10:47:19.047355    3707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:47:19.047375    3707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:47:19.055634    3707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51751
	I0725 10:47:19.055986    3707 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:47:19.056325    3707 main.go:141] libmachine: Using API Version  1
	I0725 10:47:19.056339    3707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:47:19.056563    3707 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:47:19.056676    3707 main.go:141] libmachine: (ha-485000) Calling .GetIP
	I0725 10:47:19.056771    3707 host.go:66] Checking if "ha-485000" exists ...
	I0725 10:47:19.057031    3707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:47:19.057056    3707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:47:19.068367    3707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51753
	I0725 10:47:19.068735    3707 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:47:19.069046    3707 main.go:141] libmachine: Using API Version  1
	I0725 10:47:19.069056    3707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:47:19.069269    3707 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:47:19.069379    3707 main.go:141] libmachine: (ha-485000) Calling .DriverName
	I0725 10:47:19.069506    3707 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:47:19.069527    3707 main.go:141] libmachine: (ha-485000) Calling .GetSSHHostname
	I0725 10:47:19.069612    3707 main.go:141] libmachine: (ha-485000) Calling .GetSSHPort
	I0725 10:47:19.069690    3707 main.go:141] libmachine: (ha-485000) Calling .GetSSHKeyPath
	I0725 10:47:19.069765    3707 main.go:141] libmachine: (ha-485000) Calling .GetSSHUsername
	I0725 10:47:19.069845    3707 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000/id_rsa Username:docker}
	I0725 10:47:19.098602    3707 ssh_runner.go:195] Run: systemctl --version
	I0725 10:47:19.103489    3707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:47:19.114968    3707 kubeconfig.go:125] found "ha-485000" server: "https://192.169.0.254:8443"
	I0725 10:47:19.114994    3707 api_server.go:166] Checking apiserver status ...
	I0725 10:47:19.115032    3707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 10:47:19.126570    3707 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2043/cgroup
	W0725 10:47:19.134764    3707 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2043/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 10:47:19.134808    3707 ssh_runner.go:195] Run: ls
	I0725 10:47:19.138103    3707 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0725 10:47:19.141374    3707 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0725 10:47:19.141385    3707 status.go:422] ha-485000 apiserver status = Running (err=<nil>)
	I0725 10:47:19.141394    3707 status.go:257] ha-485000 status: &{Name:ha-485000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:47:19.141407    3707 status.go:255] checking status of ha-485000-m02 ...
	I0725 10:47:19.141668    3707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:47:19.141690    3707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:47:19.150335    3707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51757
	I0725 10:47:19.150713    3707 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:47:19.151048    3707 main.go:141] libmachine: Using API Version  1
	I0725 10:47:19.151058    3707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:47:19.151283    3707 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:47:19.151393    3707 main.go:141] libmachine: (ha-485000-m02) Calling .GetState
	I0725 10:47:19.151484    3707 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:47:19.151558    3707 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid from json: 3286
	I0725 10:47:19.152534    3707 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid 3286 missing from process table
	I0725 10:47:19.152569    3707 status.go:330] ha-485000-m02 host status = "Stopped" (err=<nil>)
	I0725 10:47:19.152578    3707 status.go:343] host is not running, skipping remaining checks
	I0725 10:47:19.152585    3707 status.go:257] ha-485000-m02 status: &{Name:ha-485000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:47:19.152596    3707 status.go:255] checking status of ha-485000-m03 ...
	I0725 10:47:19.152866    3707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:47:19.152901    3707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:47:19.161803    3707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51759
	I0725 10:47:19.162196    3707 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:47:19.162559    3707 main.go:141] libmachine: Using API Version  1
	I0725 10:47:19.162576    3707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:47:19.162798    3707 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:47:19.162933    3707 main.go:141] libmachine: (ha-485000-m03) Calling .GetState
	I0725 10:47:19.163010    3707 main.go:141] libmachine: (ha-485000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:47:19.163108    3707 main.go:141] libmachine: (ha-485000-m03) DBG | hyperkit pid from json: 3293
	I0725 10:47:19.164092    3707 status.go:330] ha-485000-m03 host status = "Running" (err=<nil>)
	I0725 10:47:19.164101    3707 host.go:66] Checking if "ha-485000-m03" exists ...
	I0725 10:47:19.164356    3707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:47:19.164380    3707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:47:19.172793    3707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51761
	I0725 10:47:19.173136    3707 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:47:19.173480    3707 main.go:141] libmachine: Using API Version  1
	I0725 10:47:19.173491    3707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:47:19.173693    3707 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:47:19.173798    3707 main.go:141] libmachine: (ha-485000-m03) Calling .GetIP
	I0725 10:47:19.173882    3707 host.go:66] Checking if "ha-485000-m03" exists ...
	I0725 10:47:19.174147    3707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:47:19.174169    3707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:47:19.182497    3707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51763
	I0725 10:47:19.182841    3707 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:47:19.183193    3707 main.go:141] libmachine: Using API Version  1
	I0725 10:47:19.183208    3707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:47:19.183401    3707 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:47:19.183497    3707 main.go:141] libmachine: (ha-485000-m03) Calling .DriverName
	I0725 10:47:19.183628    3707 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:47:19.183640    3707 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHHostname
	I0725 10:47:19.183727    3707 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHPort
	I0725 10:47:19.183801    3707 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHKeyPath
	I0725 10:47:19.183874    3707 main.go:141] libmachine: (ha-485000-m03) Calling .GetSSHUsername
	I0725 10:47:19.183949    3707 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m03/id_rsa Username:docker}
	I0725 10:47:19.218923    3707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:47:19.229363    3707 kubeconfig.go:125] found "ha-485000" server: "https://192.169.0.254:8443"
	I0725 10:47:19.229376    3707 api_server.go:166] Checking apiserver status ...
	I0725 10:47:19.229413    3707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 10:47:19.240404    3707 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2009/cgroup
	W0725 10:47:19.248003    3707 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2009/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 10:47:19.248063    3707 ssh_runner.go:195] Run: ls
	I0725 10:47:19.251417    3707 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0725 10:47:19.254515    3707 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0725 10:47:19.254526    3707 status.go:422] ha-485000-m03 apiserver status = Running (err=<nil>)
	I0725 10:47:19.254535    3707 status.go:257] ha-485000-m03 status: &{Name:ha-485000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:47:19.254552    3707 status.go:255] checking status of ha-485000-m04 ...
	I0725 10:47:19.254834    3707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:47:19.254856    3707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:47:19.263383    3707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51767
	I0725 10:47:19.263741    3707 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:47:19.264101    3707 main.go:141] libmachine: Using API Version  1
	I0725 10:47:19.264115    3707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:47:19.264333    3707 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:47:19.264446    3707 main.go:141] libmachine: (ha-485000-m04) Calling .GetState
	I0725 10:47:19.264531    3707 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:47:19.264611    3707 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid from json: 3386
	I0725 10:47:19.265588    3707 status.go:330] ha-485000-m04 host status = "Running" (err=<nil>)
	I0725 10:47:19.265598    3707 host.go:66] Checking if "ha-485000-m04" exists ...
	I0725 10:47:19.265837    3707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:47:19.265867    3707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:47:19.274357    3707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51769
	I0725 10:47:19.274718    3707 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:47:19.275070    3707 main.go:141] libmachine: Using API Version  1
	I0725 10:47:19.275084    3707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:47:19.275279    3707 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:47:19.275381    3707 main.go:141] libmachine: (ha-485000-m04) Calling .GetIP
	I0725 10:47:19.275472    3707 host.go:66] Checking if "ha-485000-m04" exists ...
	I0725 10:47:19.275745    3707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:47:19.275766    3707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:47:19.284576    3707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51771
	I0725 10:47:19.284944    3707 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:47:19.285281    3707 main.go:141] libmachine: Using API Version  1
	I0725 10:47:19.285306    3707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:47:19.285505    3707 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:47:19.285604    3707 main.go:141] libmachine: (ha-485000-m04) Calling .DriverName
	I0725 10:47:19.285728    3707 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:47:19.285749    3707 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHHostname
	I0725 10:47:19.285829    3707 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHPort
	I0725 10:47:19.285914    3707 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHKeyPath
	I0725 10:47:19.286003    3707 main.go:141] libmachine: (ha-485000-m04) Calling .GetSSHUsername
	I0725 10:47:19.286089    3707 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/ha-485000-m04/id_rsa Username:docker}
	I0725 10:47:19.315982    3707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 10:47:19.327131    3707 status.go:257] ha-485000-m04 status: &{Name:ha-485000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.69s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

TestMultiControlPlane/serial/RestartSecondaryNode (38.9s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 node start m02 -v=7 --alsologtostderr
E0725 10:47:20.173524    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 10:47:46.513640    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-485000 node start m02 -v=7 --alsologtostderr: (38.386741488s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.90s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.24s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.24s)

TestMultiControlPlane/serial/StopCluster (24.92s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 stop -v=7 --alsologtostderr
E0725 10:52:20.179297    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-485000 stop -v=7 --alsologtostderr: (24.833673416s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr: exit status 7 (89.758602ms)

                                                
                                                
-- stdout --
	ha-485000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-485000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-485000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 10:52:22.375911    3904 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:52:22.376174    3904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:52:22.376179    3904 out.go:304] Setting ErrFile to fd 2...
	I0725 10:52:22.376183    3904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:52:22.376336    3904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 10:52:22.376508    3904 out.go:298] Setting JSON to false
	I0725 10:52:22.376531    3904 mustload.go:65] Loading cluster: ha-485000
	I0725 10:52:22.376570    3904 notify.go:220] Checking for updates...
	I0725 10:52:22.376827    3904 config.go:182] Loaded profile config "ha-485000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:52:22.376842    3904 status.go:255] checking status of ha-485000 ...
	I0725 10:52:22.377181    3904 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:52:22.377232    3904 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:52:22.386188    3904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52121
	I0725 10:52:22.386517    3904 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:52:22.386907    3904 main.go:141] libmachine: Using API Version  1
	I0725 10:52:22.386918    3904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:52:22.387141    3904 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:52:22.387248    3904 main.go:141] libmachine: (ha-485000) Calling .GetState
	I0725 10:52:22.387340    3904 main.go:141] libmachine: (ha-485000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:52:22.387407    3904 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid from json: 3787
	I0725 10:52:22.388283    3904 main.go:141] libmachine: (ha-485000) DBG | hyperkit pid 3787 missing from process table
	I0725 10:52:22.388317    3904 status.go:330] ha-485000 host status = "Stopped" (err=<nil>)
	I0725 10:52:22.388327    3904 status.go:343] host is not running, skipping remaining checks
	I0725 10:52:22.388333    3904 status.go:257] ha-485000 status: &{Name:ha-485000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:52:22.388355    3904 status.go:255] checking status of ha-485000-m02 ...
	I0725 10:52:22.388596    3904 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:52:22.388616    3904 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:52:22.396750    3904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52123
	I0725 10:52:22.397085    3904 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:52:22.397444    3904 main.go:141] libmachine: Using API Version  1
	I0725 10:52:22.397461    3904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:52:22.397719    3904 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:52:22.397857    3904 main.go:141] libmachine: (ha-485000-m02) Calling .GetState
	I0725 10:52:22.397954    3904 main.go:141] libmachine: (ha-485000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:52:22.398023    3904 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid from json: 3792
	I0725 10:52:22.398925    3904 main.go:141] libmachine: (ha-485000-m02) DBG | hyperkit pid 3792 missing from process table
	I0725 10:52:22.398955    3904 status.go:330] ha-485000-m02 host status = "Stopped" (err=<nil>)
	I0725 10:52:22.398960    3904 status.go:343] host is not running, skipping remaining checks
	I0725 10:52:22.398968    3904 status.go:257] ha-485000-m02 status: &{Name:ha-485000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:52:22.398980    3904 status.go:255] checking status of ha-485000-m04 ...
	I0725 10:52:22.399229    3904 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 10:52:22.399251    3904 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 10:52:22.407503    3904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52125
	I0725 10:52:22.407808    3904 main.go:141] libmachine: () Calling .GetVersion
	I0725 10:52:22.408149    3904 main.go:141] libmachine: Using API Version  1
	I0725 10:52:22.408165    3904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 10:52:22.408380    3904 main.go:141] libmachine: () Calling .GetMachineName
	I0725 10:52:22.408480    3904 main.go:141] libmachine: (ha-485000-m04) Calling .GetState
	I0725 10:52:22.408567    3904 main.go:141] libmachine: (ha-485000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 10:52:22.408633    3904 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid from json: 3811
	I0725 10:52:22.409516    3904 main.go:141] libmachine: (ha-485000-m04) DBG | hyperkit pid 3811 missing from process table
	I0725 10:52:22.409544    3904 status.go:330] ha-485000-m04 host status = "Stopped" (err=<nil>)
	I0725 10:52:22.409550    3904 status.go:343] host is not running, skipping remaining checks
	I0725 10:52:22.409557    3904 status.go:257] ha-485000-m04 status: &{Name:ha-485000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.92s)

TestMultiControlPlane/serial/RestartCluster (100.97s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-485000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0725 10:53:43.234401    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-485000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (1m40.530269968s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.97s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.25s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.25s)

TestMultiControlPlane/serial/AddSecondaryNode (75.17s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-485000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-485000 --control-plane -v=7 --alsologtostderr: (1m14.728071995s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-485000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.17s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.33s)

TestImageBuild/serial/Setup (38.24s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-515000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-515000 --driver=hyperkit : (38.24432723s)
--- PASS: TestImageBuild/serial/Setup (38.24s)

TestImageBuild/serial/NormalBuild (1.45s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-515000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-515000: (1.454063343s)
--- PASS: TestImageBuild/serial/NormalBuild (1.45s)

TestImageBuild/serial/BuildWithBuildArg (0.73s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-515000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.73s)

TestImageBuild/serial/BuildWithDockerIgnore (0.59s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-515000
E0725 10:56:24.588019    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.59s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.61s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-515000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.61s)

TestJSONOutput/start/Command (91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-391000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0725 10:57:20.181173    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-391000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m31.004377334s)
--- PASS: TestJSONOutput/start/Command (91.00s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.46s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-391000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.44s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-391000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-391000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-391000 --output=json --user=testUser: (8.349765565s)
--- PASS: TestJSONOutput/stop/Command (8.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.57s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-520000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-520000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (358.073949ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7860f3db-a743-438e-b5eb-9e7614650de1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-520000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5597cbb-062a-4e20-bb6e-54ac982aaa04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19326"}}
	{"specversion":"1.0","id":"2bb11195-ac0b-45e3-a9cc-d7052a465bdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig"}}
	{"specversion":"1.0","id":"792ab791-1b8c-4748-8e5a-e4ba26298462","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"3edbf48d-dd9c-47d2-bc5c-c9fb3dadcba9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"48b02688-d9ef-467e-9e59-8bff8661b091","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube"}}
	{"specversion":"1.0","id":"11e8f7dc-55fd-4749-845b-171a15dd5bb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a7a4d353-a44e-4347-87b6-fe93f6c54e66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-520000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-520000
--- PASS: TestErrorJSONOutput (0.57s)
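Note: the stdout captured above is a stream of CloudEvents-style JSON lines, one event per line, as emitted by minikube's --output=json mode; the error event carries the exit code and message in its data field. As a minimal sketch only (not part of the test suite, names chosen here for illustration), a small Go program like the following could decode such a stream piped in on stdin:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent mirrors only the fields visible in the JSON lines above.
	type cloudEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// Read one JSON event per line from stdin, e.g.
		//   minikube start -p demo --output=json | go run decode.go
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event
			}
			// Error events (type io.k8s.sigs.minikube.error) carry the
			// exit code and message in the data map, as seen above.
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}

Fed the output captured above, this would print, for example: io.k8s.sigs.minikube.error: The driver 'fail' is not supported on darwin/amd64.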

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (90.89s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-950000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-950000 --driver=hyperkit : (40.571239985s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-952000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-952000 --driver=hyperkit : (40.871247947s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-950000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-952000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-952000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-952000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-952000: (3.404036476s)
helpers_test.go:175: Cleaning up "first-950000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-950000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-950000: (5.234593716s)
--- PASS: TestMinikubeProfile (90.89s)

TestMultiNode/serial/FreshStart2Nodes (110.78s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-624000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0725 11:02:20.185756    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 11:02:47.649308    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-624000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m50.550624689s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.78s)

TestMultiNode/serial/DeployApp2Nodes (4.37s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-624000 -- rollout status deployment/busybox: (2.720467265s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- exec busybox-fc5497c4f-fmgpr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- exec busybox-fc5497c4f-ftcgr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- exec busybox-fc5497c4f-fmgpr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- exec busybox-fc5497c4f-ftcgr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- exec busybox-fc5497c4f-fmgpr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- exec busybox-fc5497c4f-ftcgr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.37s)

TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- exec busybox-fc5497c4f-fmgpr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- exec busybox-fc5497c4f-fmgpr -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- exec busybox-fc5497c4f-ftcgr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-624000 -- exec busybox-fc5497c4f-ftcgr -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

TestMultiNode/serial/AddNode (45.54s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-624000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-624000 -v 3 --alsologtostderr: (45.225312807s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.54s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-624000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.17s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.17s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp testdata/cp-test.txt multinode-624000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp multinode-624000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1246302505/001/cp-test_multinode-624000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp multinode-624000:/home/docker/cp-test.txt multinode-624000-m02:/home/docker/cp-test_multinode-624000_multinode-624000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m02 "sudo cat /home/docker/cp-test_multinode-624000_multinode-624000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp multinode-624000:/home/docker/cp-test.txt multinode-624000-m03:/home/docker/cp-test_multinode-624000_multinode-624000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m03 "sudo cat /home/docker/cp-test_multinode-624000_multinode-624000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp testdata/cp-test.txt multinode-624000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp multinode-624000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1246302505/001/cp-test_multinode-624000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp multinode-624000-m02:/home/docker/cp-test.txt multinode-624000:/home/docker/cp-test_multinode-624000-m02_multinode-624000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000 "sudo cat /home/docker/cp-test_multinode-624000-m02_multinode-624000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp multinode-624000-m02:/home/docker/cp-test.txt multinode-624000-m03:/home/docker/cp-test_multinode-624000-m02_multinode-624000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m03 "sudo cat /home/docker/cp-test_multinode-624000-m02_multinode-624000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp testdata/cp-test.txt multinode-624000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp multinode-624000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1246302505/001/cp-test_multinode-624000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp multinode-624000-m03:/home/docker/cp-test.txt multinode-624000:/home/docker/cp-test_multinode-624000-m03_multinode-624000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000 "sudo cat /home/docker/cp-test_multinode-624000-m03_multinode-624000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 cp multinode-624000-m03:/home/docker/cp-test.txt multinode-624000-m02:/home/docker/cp-test_multinode-624000-m03_multinode-624000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 ssh -n multinode-624000-m02 "sudo cat /home/docker/cp-test_multinode-624000-m03_multinode-624000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.16s)
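The CopyFile block cycles minikube cp through every direction the command supports: local file to a node, node back to a local path, and node to node, verifying each hop with minikube ssh -n <node> "sudo cat ...". A condensed sketch of the same flow, assuming a released minikube binary and using /tmp for the local destination:

$ # local -> control-plane node, then read it back on the node
$ minikube -p multinode-624000 cp testdata/cp-test.txt multinode-624000:/home/docker/cp-test.txt
$ minikube -p multinode-624000 ssh -n multinode-624000 "sudo cat /home/docker/cp-test.txt"
$ # node -> local
$ minikube -p multinode-624000 cp multinode-624000:/home/docker/cp-test.txt /tmp/cp-test_multinode-624000.txt
$ # node -> node (control plane to the m02 worker), verified on the receiving node
$ minikube -p multinode-624000 cp multinode-624000:/home/docker/cp-test.txt multinode-624000-m02:/home/docker/cp-test_multinode-624000_multinode-624000-m02.txt
$ minikube -p multinode-624000 ssh -n multinode-624000-m02 "sudo cat /home/docker/cp-test_multinode-624000_multinode-624000-m02.txt"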

                                                
                                    
TestMultiNode/serial/StopNode (2.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-624000 node stop m03: (2.330806139s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-624000 status: exit status 7 (250.50719ms)

                                                
                                                
-- stdout --
	multinode-624000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-624000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-624000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-624000 status --alsologtostderr: exit status 7 (242.508051ms)

                                                
                                                
-- stdout --
	multinode-624000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-624000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-624000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 11:04:54.508330    4946 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:04:54.508505    4946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:04:54.508510    4946 out.go:304] Setting ErrFile to fd 2...
	I0725 11:04:54.508514    4946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:04:54.508699    4946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 11:04:54.508871    4946 out.go:298] Setting JSON to false
	I0725 11:04:54.508893    4946 mustload.go:65] Loading cluster: multinode-624000
	I0725 11:04:54.508932    4946 notify.go:220] Checking for updates...
	I0725 11:04:54.509203    4946 config.go:182] Loaded profile config "multinode-624000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:04:54.509221    4946 status.go:255] checking status of multinode-624000 ...
	I0725 11:04:54.509577    4946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:04:54.509626    4946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:04:54.518543    4946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53094
	I0725 11:04:54.518877    4946 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:04:54.519257    4946 main.go:141] libmachine: Using API Version  1
	I0725 11:04:54.519266    4946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:04:54.519491    4946 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:04:54.519610    4946 main.go:141] libmachine: (multinode-624000) Calling .GetState
	I0725 11:04:54.519688    4946 main.go:141] libmachine: (multinode-624000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:04:54.519760    4946 main.go:141] libmachine: (multinode-624000) DBG | hyperkit pid from json: 4609
	I0725 11:04:54.520920    4946 status.go:330] multinode-624000 host status = "Running" (err=<nil>)
	I0725 11:04:54.520939    4946 host.go:66] Checking if "multinode-624000" exists ...
	I0725 11:04:54.521180    4946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:04:54.521202    4946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:04:54.529515    4946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53096
	I0725 11:04:54.529879    4946 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:04:54.530211    4946 main.go:141] libmachine: Using API Version  1
	I0725 11:04:54.530237    4946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:04:54.530512    4946 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:04:54.530645    4946 main.go:141] libmachine: (multinode-624000) Calling .GetIP
	I0725 11:04:54.530750    4946 host.go:66] Checking if "multinode-624000" exists ...
	I0725 11:04:54.530995    4946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:04:54.531017    4946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:04:54.539361    4946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53098
	I0725 11:04:54.539693    4946 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:04:54.540004    4946 main.go:141] libmachine: Using API Version  1
	I0725 11:04:54.540014    4946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:04:54.540214    4946 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:04:54.540331    4946 main.go:141] libmachine: (multinode-624000) Calling .DriverName
	I0725 11:04:54.540474    4946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 11:04:54.540496    4946 main.go:141] libmachine: (multinode-624000) Calling .GetSSHHostname
	I0725 11:04:54.540572    4946 main.go:141] libmachine: (multinode-624000) Calling .GetSSHPort
	I0725 11:04:54.540667    4946 main.go:141] libmachine: (multinode-624000) Calling .GetSSHKeyPath
	I0725 11:04:54.540748    4946 main.go:141] libmachine: (multinode-624000) Calling .GetSSHUsername
	I0725 11:04:54.540847    4946 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/multinode-624000/id_rsa Username:docker}
	I0725 11:04:54.571520    4946 ssh_runner.go:195] Run: systemctl --version
	I0725 11:04:54.575928    4946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 11:04:54.586928    4946 kubeconfig.go:125] found "multinode-624000" server: "https://192.169.0.14:8443"
	I0725 11:04:54.586952    4946 api_server.go:166] Checking apiserver status ...
	I0725 11:04:54.586992    4946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:04:54.598224    4946 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1983/cgroup
	W0725 11:04:54.605493    4946 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1983/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 11:04:54.605535    4946 ssh_runner.go:195] Run: ls
	I0725 11:04:54.608865    4946 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0725 11:04:54.612039    4946 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0725 11:04:54.612049    4946 status.go:422] multinode-624000 apiserver status = Running (err=<nil>)
	I0725 11:04:54.612064    4946 status.go:257] multinode-624000 status: &{Name:multinode-624000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 11:04:54.612077    4946 status.go:255] checking status of multinode-624000-m02 ...
	I0725 11:04:54.612317    4946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:04:54.612336    4946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:04:54.621025    4946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53102
	I0725 11:04:54.621367    4946 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:04:54.621734    4946 main.go:141] libmachine: Using API Version  1
	I0725 11:04:54.621752    4946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:04:54.621980    4946 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:04:54.622103    4946 main.go:141] libmachine: (multinode-624000-m02) Calling .GetState
	I0725 11:04:54.622187    4946 main.go:141] libmachine: (multinode-624000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:04:54.622261    4946 main.go:141] libmachine: (multinode-624000-m02) DBG | hyperkit pid from json: 4640
	I0725 11:04:54.623425    4946 status.go:330] multinode-624000-m02 host status = "Running" (err=<nil>)
	I0725 11:04:54.623434    4946 host.go:66] Checking if "multinode-624000-m02" exists ...
	I0725 11:04:54.623677    4946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:04:54.623699    4946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:04:54.632112    4946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53104
	I0725 11:04:54.632464    4946 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:04:54.632817    4946 main.go:141] libmachine: Using API Version  1
	I0725 11:04:54.632842    4946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:04:54.633075    4946 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:04:54.633201    4946 main.go:141] libmachine: (multinode-624000-m02) Calling .GetIP
	I0725 11:04:54.633286    4946 host.go:66] Checking if "multinode-624000-m02" exists ...
	I0725 11:04:54.633553    4946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:04:54.633576    4946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:04:54.641999    4946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53106
	I0725 11:04:54.642339    4946 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:04:54.642665    4946 main.go:141] libmachine: Using API Version  1
	I0725 11:04:54.642675    4946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:04:54.642863    4946 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:04:54.642989    4946 main.go:141] libmachine: (multinode-624000-m02) Calling .DriverName
	I0725 11:04:54.643125    4946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 11:04:54.643137    4946 main.go:141] libmachine: (multinode-624000-m02) Calling .GetSSHHostname
	I0725 11:04:54.643206    4946 main.go:141] libmachine: (multinode-624000-m02) Calling .GetSSHPort
	I0725 11:04:54.643295    4946 main.go:141] libmachine: (multinode-624000-m02) Calling .GetSSHKeyPath
	I0725 11:04:54.643385    4946 main.go:141] libmachine: (multinode-624000-m02) Calling .GetSSHUsername
	I0725 11:04:54.643459    4946 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1195/.minikube/machines/multinode-624000-m02/id_rsa Username:docker}
	I0725 11:04:54.673765    4946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 11:04:54.684808    4946 status.go:257] multinode-624000-m02 status: &{Name:multinode-624000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0725 11:04:54.684835    4946 status.go:255] checking status of multinode-624000-m03 ...
	I0725 11:04:54.685097    4946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:04:54.685125    4946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:04:54.693695    4946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53109
	I0725 11:04:54.694043    4946 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:04:54.694371    4946 main.go:141] libmachine: Using API Version  1
	I0725 11:04:54.694381    4946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:04:54.694602    4946 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:04:54.694725    4946 main.go:141] libmachine: (multinode-624000-m03) Calling .GetState
	I0725 11:04:54.694806    4946 main.go:141] libmachine: (multinode-624000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:04:54.694903    4946 main.go:141] libmachine: (multinode-624000-m03) DBG | hyperkit pid from json: 4726
	I0725 11:04:54.696016    4946 main.go:141] libmachine: (multinode-624000-m03) DBG | hyperkit pid 4726 missing from process table
	I0725 11:04:54.696066    4946 status.go:330] multinode-624000-m03 host status = "Stopped" (err=<nil>)
	I0725 11:04:54.696076    4946 status.go:343] host is not running, skipping remaining checks
	I0725 11:04:54.696083    4946 status.go:257] multinode-624000-m03 status: &{Name:multinode-624000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.82s)
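StopNode stops one worker and then leans on the exit code of status to assert the degraded state: the stopped node reports host/kubelet Stopped and the command exits non-zero (exit status 7 in the runs above). Roughly, with a released minikube binary:

$ minikube -p multinode-624000 node stop m03
$ minikube -p multinode-624000 status --alsologtostderr
$ echo $?   # 7 here, because one node is stopped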

                                                
                                    
TestMultiNode/serial/StartAfterStop (41.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-624000 node start m03 -v=7 --alsologtostderr: (41.305674512s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.66s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (207.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-624000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-624000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-624000: (18.805152308s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-624000 --wait=true -v=8 --alsologtostderr
E0725 11:06:24.602169    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 11:07:20.193959    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-624000 --wait=true -v=8 --alsologtostderr: (3m8.821589462s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-624000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (207.74s)

                                                
                                    
TestMultiNode/serial/DeleteNode (3.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-624000 node delete m03: (3.009586124s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.34s)
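DeleteNode removes the worker that was restarted earlier and checks both minikube's view (status) and the cluster's (kubectl get nodes, plus a go-template pass over the Ready conditions). A short sketch against the same profile:

$ minikube -p multinode-624000 node delete m03
$ minikube -p multinode-624000 status --alsologtostderr
$ kubectl get nodes   # the deleted node should no longer be listed, remaining nodes Ready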

                                                
                                    
TestMultiNode/serial/StopMultiNode (16.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-624000 stop: (16.596188913s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-624000 status: exit status 7 (77.830271ms)

                                                
                                                
-- stdout --
	multinode-624000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-624000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-624000 status --alsologtostderr: exit status 7 (77.908114ms)

                                                
                                                
-- stdout --
	multinode-624000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-624000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 11:09:24.170986    5180 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:09:24.171158    5180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:09:24.171164    5180 out.go:304] Setting ErrFile to fd 2...
	I0725 11:09:24.171168    5180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:09:24.171344    5180 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1195/.minikube/bin
	I0725 11:09:24.171518    5180 out.go:298] Setting JSON to false
	I0725 11:09:24.171539    5180 mustload.go:65] Loading cluster: multinode-624000
	I0725 11:09:24.171577    5180 notify.go:220] Checking for updates...
	I0725 11:09:24.171843    5180 config.go:182] Loaded profile config "multinode-624000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:09:24.171857    5180 status.go:255] checking status of multinode-624000 ...
	I0725 11:09:24.172260    5180 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:09:24.172319    5180 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:09:24.181190    5180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53361
	I0725 11:09:24.181522    5180 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:09:24.181926    5180 main.go:141] libmachine: Using API Version  1
	I0725 11:09:24.181941    5180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:09:24.182136    5180 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:09:24.182278    5180 main.go:141] libmachine: (multinode-624000) Calling .GetState
	I0725 11:09:24.182376    5180 main.go:141] libmachine: (multinode-624000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:09:24.182448    5180 main.go:141] libmachine: (multinode-624000) DBG | hyperkit pid from json: 5027
	I0725 11:09:24.183343    5180 main.go:141] libmachine: (multinode-624000) DBG | hyperkit pid 5027 missing from process table
	I0725 11:09:24.183378    5180 status.go:330] multinode-624000 host status = "Stopped" (err=<nil>)
	I0725 11:09:24.183387    5180 status.go:343] host is not running, skipping remaining checks
	I0725 11:09:24.183393    5180 status.go:257] multinode-624000 status: &{Name:multinode-624000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 11:09:24.183415    5180 status.go:255] checking status of multinode-624000-m02 ...
	I0725 11:09:24.183650    5180 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0725 11:09:24.183674    5180 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0725 11:09:24.192066    5180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53363
	I0725 11:09:24.192421    5180 main.go:141] libmachine: () Calling .GetVersion
	I0725 11:09:24.192780    5180 main.go:141] libmachine: Using API Version  1
	I0725 11:09:24.192794    5180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 11:09:24.193054    5180 main.go:141] libmachine: () Calling .GetMachineName
	I0725 11:09:24.193198    5180 main.go:141] libmachine: (multinode-624000-m02) Calling .GetState
	I0725 11:09:24.193288    5180 main.go:141] libmachine: (multinode-624000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0725 11:09:24.193366    5180 main.go:141] libmachine: (multinode-624000-m02) DBG | hyperkit pid from json: 5080
	I0725 11:09:24.194257    5180 main.go:141] libmachine: (multinode-624000-m02) DBG | hyperkit pid 5080 missing from process table
	I0725 11:09:24.194292    5180 status.go:330] multinode-624000-m02 host status = "Stopped" (err=<nil>)
	I0725 11:09:24.194302    5180 status.go:343] host is not running, skipping remaining checks
	I0725 11:09:24.194308    5180 status.go:257] multinode-624000-m02 status: &{Name:multinode-624000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.75s)
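StopMultiNode stops the whole profile and, as in StopNode, uses the non-zero exit from status (7 again) to confirm that every remaining node reports Stopped. Roughly:

$ minikube -p multinode-624000 stop
$ minikube -p multinode-624000 status || echo "status exited $? (expected 7 while nodes are stopped)"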

                                                
                                    
TestMultiNode/serial/RestartMultiNode (178.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-624000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0725 11:10:23.253284    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 11:11:24.605564    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 11:12:20.198521    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-624000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (2m58.306449974s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-624000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.64s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-624000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-624000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-624000-m02 --driver=hyperkit : exit status 14 (469.872823ms)

                                                
                                                
-- stdout --
	* [multinode-624000-m02] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-624000-m02' is duplicated with machine name 'multinode-624000-m02' in profile 'multinode-624000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-624000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-624000-m03 --driver=hyperkit : (39.819074837s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-624000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-624000: exit status 80 (265.647638ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-624000 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-624000-m03 already exists in multinode-624000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-624000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-624000-m03: (3.501394551s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.11s)
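ValidateNameConflict covers two collision paths: starting a new profile whose name matches a machine inside an existing profile is rejected up front (MK_USAGE, exit 14), and node add refuses to create a node whose auto-generated name collides with a standalone profile (GUEST_NODE_ADD, exit 80). Roughly, against the multi-node profile above:

$ # profile name collides with the existing worker machine multinode-624000-m02 -> exit 14
$ minikube start -p multinode-624000-m02 --driver=hyperkit
$ # create a standalone profile that shadows the next auto-generated node name, then try to add a node
$ minikube start -p multinode-624000-m03 --driver=hyperkit
$ minikube node add -p multinode-624000    # fails with GUEST_NODE_ADD -> exit 80
$ minikube delete -p multinode-624000-m03  # cleanup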

                                                
                                    
TestPreload (169s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-802000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-802000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m15.463648343s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-802000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-802000 image pull gcr.io/k8s-minikube/busybox: (1.217666022s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-802000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-802000: (8.366943588s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-802000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-802000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m18.575599612s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-802000 image list
helpers_test.go:175: Cleaning up "test-preload-802000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-802000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-802000: (5.222672113s)
--- PASS: TestPreload (169.00s)
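TestPreload builds a cluster on an older Kubernetes release with the preload tarball disabled, side-loads an extra image, stops the cluster, and restarts it on the default (newer) release; the manually pulled image still showing up in image list after the restart is the success condition. A compressed sketch with a released minikube binary:

$ minikube start -p test-preload-802000 --memory=2200 --wait=true --preload=false --kubernetes-version=v1.24.4 --driver=hyperkit
$ minikube -p test-preload-802000 image pull gcr.io/k8s-minikube/busybox
$ minikube stop -p test-preload-802000
$ minikube start -p test-preload-802000 --memory=2200 --wait=true --driver=hyperkit
$ minikube -p test-preload-802000 image list   # gcr.io/k8s-minikube/busybox should still be listed
$ minikube delete -p test-preload-802000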

                                                
                                    
TestSkaffold (112.63s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe4269018541 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe4269018541 version: (1.72482774s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-719000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-719000 --memory=2600 --driver=hyperkit : (36.393638931s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe4269018541 run --minikube-profile skaffold-719000 --kube-context skaffold-719000 --status-check=true --port-forward=false --interactive=false
E0725 11:19:27.695072    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe4269018541 run --minikube-profile skaffold-719000 --kube-context skaffold-719000 --status-check=true --port-forward=false --interactive=false: (55.852704959s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5dc756559-lfj7r" [f24f098a-ec9d-4dcb-b32a-fd89e9d30d6e] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.005168036s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7876c8df7b-v9kcz" [3175b82a-bf4c-4b05-92ff-bc96e8aba2d4] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004328915s
helpers_test.go:175: Cleaning up "skaffold-719000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-719000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-719000: (5.246773162s)
--- PASS: TestSkaffold (112.63s)
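TestSkaffold downloads a pinned skaffold build, points it at a fresh minikube profile via the --minikube-profile and --kube-context flags, and waits for the sample leeroy-app / leeroy-web deployments to become healthy. A hedged outline, assuming skaffold and minikube on PATH and a checkout that contains the example's skaffold.yaml:

$ minikube start -p skaffold-719000 --memory=2600 --driver=hyperkit
$ skaffold run --minikube-profile skaffold-719000 --kube-context skaffold-719000 --status-check=true --port-forward=false --interactive=false
$ kubectl --context skaffold-719000 get pods -l app=leeroy-app   # should reach Running
$ minikube delete -p skaffold-719000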

                                                
                                    
TestRunningBinaryUpgrade (90.39s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2222195556 start -p running-upgrade-692000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2222195556 start -p running-upgrade-692000 --memory=2200 --vm-driver=hyperkit : (1m1.571151095s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-692000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-692000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (22.333415172s)
helpers_test.go:175: Cleaning up "running-upgrade-692000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-692000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-692000: (5.254636457s)
--- PASS: TestRunningBinaryUpgrade (90.39s)
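RunningBinaryUpgrade starts a cluster with an old released binary (v1.26.0 here, which still uses --vm-driver) and then, without stopping it, re-runs start on the same profile with the binary under test, so the upgrade is exercised against a live VM. Sketch, with ./minikube-v1.26.0 as a stand-in filename for the old release and the current minikube on PATH:

$ ./minikube-v1.26.0 start -p running-upgrade-692000 --memory=2200 --vm-driver=hyperkit
$ minikube start -p running-upgrade-692000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit
$ minikube delete -p running-upgrade-692000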

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.49s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19326
- KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3121397713/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3121397713/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3121397713/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3121397713/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.49s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.86s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19326
- KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1001047053/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1001047053/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1001047053/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1001047053/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.86s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.33s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (680.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3907042532 start -p stopped-upgrade-895000 --memory=2200 --vm-driver=hyperkit 
E0725 11:47:20.269003    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 11:50:04.771003    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 11:51:24.586899    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 11:52:20.179924    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 11:52:47.643773    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 11:55:04.769589    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3907042532 start -p stopped-upgrade-895000 --memory=2200 --vm-driver=hyperkit : (10m15.793069533s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3907042532 -p stopped-upgrade-895000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3907042532 -p stopped-upgrade-895000 stop: (8.278833927s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-895000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0725 11:57:20.179594    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 11:58:07.831363    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-895000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (56.500818183s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (680.57s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-895000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-895000: (2.59958366s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.60s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-340000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-340000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (662.939046ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-340000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1195/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1195/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.66s)
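This subtest is pure argument validation: --no-kubernetes and --kubernetes-version are mutually exclusive, so the start exits with MK_USAGE (14) and the error message points at the global config as the usual source of a stray version pin. Roughly:

$ minikube start -p NoKubernetes-340000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit   # exits 14
$ # if a version is pinned globally, clear it before retrying with --no-kubernetes
$ minikube config unset kubernetes-version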

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (75.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-340000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-340000 --driver=hyperkit : (1m14.936468001s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-340000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (75.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (65.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (1m5.91605011s)
--- PASS: TestNetworkPlugins/group/auto/Start (65.92s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-340000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-340000 --no-kubernetes --driver=hyperkit : (6.058190073s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-340000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-340000 status -o json: exit status 2 (149.722177ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-340000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-340000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-340000: (2.396181541s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.60s)

                                                
                                    
TestNoKubernetes/serial/Start (22.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-340000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-340000 --no-kubernetes --driver=hyperkit : (22.116922023s)
--- PASS: TestNoKubernetes/serial/Start (22.12s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-691000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-691000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pk9d6" [a35b7c22-c9c5-4f63-8f53-8fe4f96b8b8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pk9d6" [a35b7c22-c9c5-4f63-8f53-8fe4f96b8b8b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003490692s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.15s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-340000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-340000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (124.856846ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)
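With --no-kubernetes the VM boots but kubelet is never enabled, so verification is simply that systemctl is-active fails inside the guest; the ssh wrapper surfaces that as the non-zero exit seen above. Sketch:

$ minikube ssh -p NoKubernetes-340000 "sudo systemctl is-active --quiet service kubelet"
$ echo $?   # non-zero: kubelet is not active in the guest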

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
E0725 12:00:04.769425    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.47s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-340000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-340000: (2.373935033s)
--- PASS: TestNoKubernetes/serial/Stop (2.37s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (19.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-340000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-340000 --driver=hyperkit : (19.650992057s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.65s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-691000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
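The three short checks above (DNS, Localhost, HairPin) each run one command inside the netcat Deployment. A minimal sketch of the equivalent kubectl calls wrapped in Go follows; the context name and commands are copied from the log, and the probe helper is hypothetical rather than part of net_test.go.

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command inside the netcat Deployment of the given profile.
func probe(context string, args ...string) {
	full := append([]string{"--context", context, "exec", "deployment/netcat", "--"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("kubectl %v -> err=%v\n%s\n", args, err, out)
}

func main() {
	ctx := "auto-691000"
	// DNS: resolve the cluster API service name from inside the pod.
	probe(ctx, "nslookup", "kubernetes.default")
	// Localhost: TCP connect to port 8080 on the pod itself.
	probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	// HairPin: connect back to the pod through its own Service name.
	probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}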

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.14s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-340000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-340000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (135.968825ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.14s)

TestNetworkPlugins/group/kindnet/Start (71.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m11.168436318s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.17s)

TestNetworkPlugins/group/flannel/Start (74.14s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
E0725 12:01:24.587509    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (1m14.136593661s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.14s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-zktrn" [b699c9b6-74ed-4699-9c6a-4d0eacae5845] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004440363s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lrgmr" [24208109-5aac-4472-bc57-69f1ad3b955b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004585444s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
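Both ControllerPod checks above follow the same pattern: wait until a pod matching the CNI's label reports Running. A rough stand-in for that wait, assuming kubectl and the flannel-691000 context from the log (this is not the helpers_test.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Label, namespace and 10-minute budget mirror the flannel block above.
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "flannel-691000",
			"-n", "kube-flannel", "get", "pods", "-l", "app=flannel",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("flannel controller pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=flannel in kube-flannel")
}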

TestNetworkPlugins/group/kindnet/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-691000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.15s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.13s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-691000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k7qsg" [78ca2b39-18d1-4875-a6ec-d37bf78bfb50] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-k7qsg" [78ca2b39-18d1-4875-a6ec-d37bf78bfb50] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003324457s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.13s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-691000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/flannel/NetCatPod (10.14s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-691000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-m8nhj" [25c7c2ae-da5c-499c-a117-994bc993902f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-m8nhj" [25c7c2ae-da5c-499c-a117-994bc993902f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005746745s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.14s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-691000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-691000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (100.54s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m40.535194348s)
--- PASS: TestNetworkPlugins/group/bridge/Start (100.54s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-691000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

TestNetworkPlugins/group/bridge/NetCatPod (12.13s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-691000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-48gmz" [03223371-70a9-44e0-beeb-ff06e15d1a61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-48gmz" [03223371-70a9-44e0-beeb-ff06e15d1a61] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004172872s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.13s)
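Each NetCatPod step in these groups amounts to re-creating the netcat Deployment from the test's manifest and waiting for it to become ready. A sketch under those assumptions, using kubectl rollout status as a stand-in for the test's label-based wait; the manifest path and context are taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	ctx := "bridge-691000"
	// Force-recreate the Deployment from the test's manifest, as the log shows.
	if err := run("--context", ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml"); err != nil {
		fmt.Println("replace failed:", err)
		return
	}
	// Block until the new pod is ready; 15m matches the test's wait budget.
	if err := run("--context", ctx, "rollout", "status", "deployment/netcat", "--timeout=15m"); err != nil {
		fmt.Println("rollout did not complete:", err)
	}
}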

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-691000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (53.52s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (53.52136123s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (53.52s)

TestNetworkPlugins/group/custom-flannel/Start (65.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
E0725 12:05:04.038489    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:04.044254    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:04.055321    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:04.075673    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:04.115807    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:04.196877    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:04.357466    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:04.677616    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:04.807209    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 12:05:05.317932    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:06.598108    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:09.159031    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:14.279731    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:05:24.520890    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (1m5.401215016s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.40s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-691000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.13s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-691000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-wbkmh" [77ad760c-c463-4475-bc4f-02c21d1b980a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-wbkmh" [77ad760c-c463-4475-bc4f-02c21d1b980a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.003420095s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.13s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-691000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-691000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-691000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6kmhm" [529d47c3-ff9c-4f8b-9cc2-044ba1ebfd0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6kmhm" [529d47c3-ff9c-4f8b-9cc2-044ba1ebfd0c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003064066s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.14s)

TestNetworkPlugins/group/calico/Start (81.87s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m21.869768659s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.87s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-691000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (53.82s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
E0725 12:06:24.624692    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 12:06:25.962275    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:06:40.777046    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:40.782167    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:40.792888    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:40.813080    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:40.853305    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:40.934044    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:41.095615    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:41.416409    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:42.057374    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:43.337864    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:45.898326    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:46.561957    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:06:46.567805    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:06:46.577940    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:06:46.598594    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:06:46.638997    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:06:46.720697    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:06:46.881393    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:06:47.201589    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:06:47.843297    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:06:49.123624    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:06:51.019648    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:06:51.684776    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:06:56.805104    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:07:01.259857    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:07:07.045824    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-691000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (53.821104653s)
--- PASS: TestNetworkPlugins/group/false/Start (53.82s)

TestNetworkPlugins/group/false/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-691000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.17s)

TestNetworkPlugins/group/false/NetCatPod (10.14s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-691000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8wc67" [82f5a349-dbb3-4e7a-aedc-93a1a285d842] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-8wc67" [82f5a349-dbb3-4e7a-aedc-93a1a285d842] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.002888226s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.14s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rfdbs" [d4524cc0-3a0c-4c49-9d31-c289aeeeb2f9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005465117s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-691000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0725 12:07:20.217861    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-691000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

TestNetworkPlugins/group/calico/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-691000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7svqs" [a54fec23-2422-4740-a866-49ceddb3d622] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0725 12:07:27.527225    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-7svqs" [a54fec23-2422-4740-a866-49ceddb3d622] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003595904s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.13s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-691000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-691000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)
E0725 12:23:09.663005    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory

TestStartStop/group/old-k8s-version/serial/FirstStart (147.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-777000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-777000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m27.46526908s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.47s)

TestStartStop/group/no-preload/serial/FirstStart (57.2s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-710000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0725 12:08:02.702435    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:08:08.488701    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-710000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (57.201651862s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.20s)

TestStartStop/group/no-preload/serial/DeployApp (7.2s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-710000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8a555da6-57a8-4a0b-b2c7-497975d924b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8a555da6-57a8-4a0b-b2c7-497975d924b2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.005284028s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-710000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.20s)
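DeployApp reduces to creating the busybox pod from the test's manifest, waiting for it to become ready, and reading its open-file limit. A minimal sketch assuming the no-preload-710000 context and manifest path from the log; the wait step here uses kubectl wait rather than the test's own polling helper:

package main

import (
	"fmt"
	"os/exec"
)

// kube runs kubectl against the profile shown in the log above.
func kube(args ...string) (string, error) {
	out, err := exec.Command("kubectl", append([]string{"--context", "no-preload-710000"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	if out, err := kube("create", "-f", "testdata/busybox.yaml"); err != nil {
		fmt.Println("create failed:", err, out)
		return
	}
	// Wait for the pod to become Ready (the log shows an 8m budget for this step).
	if out, err := kube("wait", "--for=condition=Ready", "pod/busybox", "--timeout=8m"); err != nil {
		fmt.Println("pod never became Ready:", err, out)
		return
	}
	// The test then checks the container's file-descriptor limit.
	out, err := kube("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Println("ulimit -n:", out, err)
}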

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-710000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-710000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/no-preload/serial/Stop (8.43s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-710000 --alsologtostderr -v=3
E0725 12:09:01.698728    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:01.705228    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:01.716999    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:01.737205    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:01.779390    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:01.859682    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:02.019885    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:02.342056    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:02.982344    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:04.264668    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-710000 --alsologtostderr -v=3: (8.432328251s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.43s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-710000 -n no-preload-710000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-710000 -n no-preload-710000: exit status 7 (66.29857ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-710000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)
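On the step above: minikube status exits non-zero (7 here) when the host is Stopped, which the test explicitly treats as acceptable before enabling the dashboard addon on the stopped profile. A sketch of the same two calls, with the binary path, profile, node flag, and image override copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk := "out/minikube-darwin-amd64"
	profile := "no-preload-710000"

	// Host status of a stopped profile: expect "Stopped" plus exit status 7.
	out, err := exec.Command(mk, "status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	fmt.Printf("status: %s (err: %v)\n", out, err)

	// The test enables the addon while the cluster is stopped; it is picked
	// up when the profile is started again (SecondStart below).
	out, err = exec.Command(mk, "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4").CombinedOutput()
	fmt.Printf("addons enable: %s (err: %v)\n", out, err)
}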

TestStartStop/group/no-preload/serial/SecondStart (382.82s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-710000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0725 12:09:06.825394    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:11.947013    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:22.188083    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:09:24.625054    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:09:27.683748    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 12:09:30.410097    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:09:42.669110    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:10:04.040433    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:10:04.810397    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-710000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (6m22.613794458s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-710000 -n no-preload-710000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (382.82s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-777000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [36f5996a-a3cf-43dd-892d-c2cc60331759] Pending
helpers_test.go:344: "busybox" [36f5996a-a3cf-43dd-892d-c2cc60331759] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [36f5996a-a3cf-43dd-892d-c2cc60331759] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005435369s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-777000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-777000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-777000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (8.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-777000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-777000 --alsologtostderr -v=3: (8.396534934s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-777000 -n old-k8s-version-777000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-777000 -n old-k8s-version-777000: exit status 7 (67.062128ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-777000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0725 12:10:23.631433    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/old-k8s-version/serial/SecondStart (401.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-777000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0725 12:10:24.966418    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:24.972262    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:24.982764    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:25.002852    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:25.043142    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:25.123320    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:25.284782    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:25.606035    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:26.247098    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:27.528413    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:30.089700    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:31.725712    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:10:35.209893    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:45.452021    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:10:47.003349    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:10:47.009828    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:10:47.022041    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:10:47.042921    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:10:47.083109    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:10:47.163392    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:10:47.324427    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:10:47.645588    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:10:48.287149    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:10:49.569282    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:10:52.131497    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:10:57.251903    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:11:05.933358    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:11:07.492094    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:11:24.626981    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 12:11:27.973536    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:11:40.781574    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:11:45.552794    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:11:46.565154    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:11:46.893904    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:12:08.467810    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:12:08.934280    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:12:10.076113    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:10.081308    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:10.091432    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:10.112721    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:10.153655    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:10.234454    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:10.396529    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:10.717261    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:11.359483    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:12.641729    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:14.252551    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:12:15.204025    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:16.406578    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:16.412984    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:16.423672    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:16.446018    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:16.486390    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:16.566529    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:16.727479    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:17.048154    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:17.688848    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:18.969009    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:20.219885    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
E0725 12:12:20.325518    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:21.530277    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:26.651142    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:30.566014    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:36.891743    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:12:51.047207    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:12:57.372288    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:13:08.816621    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:13:30.856420    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:13:32.007888    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:13:38.333406    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:14:01.699881    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:14:29.395764    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:14:47.874861    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 12:14:53.930180    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:15:00.255320    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:15:04.042978    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:15:04.813287    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 12:15:24.967770    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-777000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m41.069741536s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-777000 -n old-k8s-version-777000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (401.23s)
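For anyone replaying this step by hand: SecondStart amounts to re-running minikube start against the previously stopped profile and then asserting the host reports Running via status --format={{.Host}}. A minimal Go sketch under those assumptions (binary path and profile name are taken from the log above; the wrapper and the "Running" check are illustrative, not the actual start_stop_delete_test.go code, and only a subset of the start flags is shown):

	// secondstart_sketch.go - re-start a stopped profile and verify the host state.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "old-k8s-version-777000"
		bin := "out/minikube-darwin-amd64"

		// Re-start with a subset of the flags shown in the log above.
		start := exec.Command(bin, "start", "-p", profile, "--memory=2200",
			"--wait=true", "--driver=hyperkit", "--kubernetes-version=v1.20.0")
		if out, err := start.CombinedOutput(); err != nil {
			log.Fatalf("second start failed: %v\n%s", err, out)
		}

		// The test then checks the host field of `minikube status`.
		status := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := status.CombinedOutput()
		if err != nil {
			log.Fatalf("status failed: %v\n%s", err, out)
		}
		// The report does not print this value for SecondStart; "Running" is the expected healthy state.
		if strings.TrimSpace(string(out)) != "Running" {
			log.Fatalf("expected host Running, got %q", out)
		}
		fmt.Println("second start verified")
	}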

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-9sm2g" [788d0acd-891b-4c92-b2c8-e0be6ade4e14] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-9sm2g" [788d0acd-891b-4c92-b2c8-e0be6ade4e14] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005085041s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
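The UserAppExistsAfterStop check is a label-selector wait: pods matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace must become Running and Ready within 9m0s. A rough stand-alone equivalent, using kubectl wait instead of the harness's own poller in helpers_test.go (context name, label, namespace and timeout are copied from the log; the use of kubectl wait is an assumption of this sketch):

	// dashboardwait_sketch.go - wait for the dashboard pods the way the output above describes.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// kubectl wait blocks until the labelled pods report Ready or the timeout expires.
		cmd := exec.Command("kubectl", "--context", "no-preload-710000",
			"-n", "kubernetes-dashboard", "wait", "pod",
			"-l", "k8s-app=kubernetes-dashboard",
			"--for=condition=Ready", "--timeout=9m")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("dashboard pods never became ready: %v\n%s", err, out)
		}
		log.Println("k8s-app=kubernetes-dashboard is healthy")
	}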

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-9sm2g" [788d0acd-891b-4c92-b2c8-e0be6ade4e14] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006080643s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-710000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-710000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)
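VerifyKubernetesImages lists the images loaded in the profile as JSON and compares them with the set expected for the Kubernetes version, which is how the extra gcr.io/k8s-minikube/busybox:1.28.4-glibc image gets reported above. The exact JSON schema is not shown in this report, so the sketch below only runs the same command and pretty-prints whatever it returns (a generic illustration, not the test's comparison logic):

	// imagelist_sketch.go - dump the image list the VerifyKubernetesImages step inspects.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "no-preload-710000",
			"image", "list", "--format=json")
		out, err := cmd.Output()
		if err != nil {
			log.Fatalf("image list failed: %v", err)
		}
		// Parse generically and pretty-print; the real test diffs the entries
		// against the images expected for this Kubernetes version.
		var images interface{}
		if err := json.Unmarshal(out, &images); err != nil {
			log.Fatalf("unexpected output: %v\n%s", err, out)
		}
		pretty, _ := json.MarshalIndent(images, "", "  ")
		fmt.Println(string(pretty))
	}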

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (1.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-710000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-710000 -n no-preload-710000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-710000 -n no-preload-710000: exit status 2 (160.259025ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-710000 -n no-preload-710000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-710000 -n no-preload-710000: exit status 2 (161.229316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-710000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-710000 -n no-preload-710000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-710000 -n no-preload-710000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.93s)
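The Pause step packs in the most assertions: pause the profile, confirm status --format={{.APIServer}} prints Paused and status --format={{.Kubelet}} prints Stopped (both exit 2 in that state, hence the repeated "may be ok" notes), then unpause and query both again. A compact sketch of that exit-code handling, with the binary, profile and format strings taken from the log (the statusField helper is hypothetical):

	// pausecheck_sketch.go - mirror the pause/unpause verification sequence.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// statusField runs `minikube status` with a Go-template format string and
	// returns the printed value plus the exit code (2 is expected while paused).
	func statusField(profile, format string) (string, int) {
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format="+format, "-p", profile, "-n", profile)
		out, err := cmd.CombinedOutput()
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode()
		} else if err != nil {
			log.Fatalf("status did not run: %v", err)
		}
		return strings.TrimSpace(string(out)), code
	}

	func main() {
		profile := "no-preload-710000"
		bin := "out/minikube-darwin-amd64"

		if out, err := exec.Command(bin, "pause", "-p", profile).CombinedOutput(); err != nil {
			log.Fatalf("pause failed: %v\n%s", err, out)
		}
		api, code := statusField(profile, "{{.APIServer}}")
		fmt.Printf("apiserver=%s (exit %d, 2 is expected while paused)\n", api, code)
		kubelet, code := statusField(profile, "{{.Kubelet}}")
		fmt.Printf("kubelet=%s (exit %d)\n", kubelet, code)

		if out, err := exec.Command(bin, "unpause", "-p", profile).CombinedOutput(); err != nil {
			log.Fatalf("unpause failed: %v\n%s", err, out)
		}
	}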

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (90.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-599000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3
E0725 12:15:47.006098    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:15:52.658823    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:16:14.698061    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/custom-flannel-691000/client.crt: no such file or directory
E0725 12:16:24.629914    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/functional-402000/client.crt: no such file or directory
E0725 12:16:40.782292    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kindnet-691000/client.crt: no such file or directory
E0725 12:16:46.566977    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/flannel-691000/client.crt: no such file or directory
E0725 12:17:03.282238    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-599000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3: (1m30.844243974s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g7znn" [c34864c0-638c-4377-97a7-477e991ebadb] Running
E0725 12:17:10.077877    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003316324s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g7znn" [c34864c0-638c-4377-97a7-477e991ebadb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004062334s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-777000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-777000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (1.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-777000 --alsologtostderr -v=1
E0725 12:17:16.410385    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-777000 -n old-k8s-version-777000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-777000 -n old-k8s-version-777000: exit status 2 (156.908134ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-777000 -n old-k8s-version-777000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-777000 -n old-k8s-version-777000: exit status 2 (154.506188ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-777000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-777000 -n old-k8s-version-777000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-777000 -n old-k8s-version-777000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-599000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5aa63a7a-a563-404a-93a3-fd23565e8b20] Pending
helpers_test.go:344: "busybox" [5aa63a7a-a563-404a-93a3-fd23565e8b20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0725 12:17:20.223958    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/addons-545000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [5aa63a7a-a563-404a-93a3-fd23565e8b20] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003304523s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-599000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.20s)
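DeployApp creates a pod from the repository's testdata/busybox.yaml (its contents are not reproduced in this report), waits up to 8m0s for the integration-test=busybox pod, and then execs ulimit -n inside it as a basic probe of the container runtime. A stand-alone approximation, again substituting kubectl wait for the harness poller (context name, label, timeout and the ulimit probe come from the log; the run helper is illustrative):

	// deployapp_sketch.go - create the busybox pod and probe it, as the test output describes.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func run(name string, args ...string) []byte {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
		return out
	}

	func main() {
		ctx := "--context=embed-certs-599000"

		// Apply the manifest shipped with the integration tests (contents not shown here).
		run("kubectl", ctx, "create", "-f", "testdata/busybox.yaml")

		// Wait until the labelled pod is Ready (the report allows up to 8m for this).
		run("kubectl", ctx, "-n", "default", "wait", "pod",
			"-l", "integration-test=busybox", "--for=condition=Ready", "--timeout=8m")

		// Same probe the test runs once the pod is healthy.
		out := run("kubectl", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
		fmt.Printf("open-file limit inside busybox: %s", out)
	}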

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (166.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-620000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-620000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3: (2m46.425377317s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (166.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-599000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-599000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)
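EnableAddonWhileActive enables the metrics-server addon on the running cluster with its image and registry overridden (fake.domain, so the pods are not expected to actually pull) and then checks that the Deployment exists via kubectl describe. A self-contained sketch using the same profile, image and registry strings shown above:

	// metricsaddon_sketch.go - enable metrics-server with overridden image/registry and inspect it.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		profile := "embed-certs-599000"

		enable := exec.Command("out/minikube-darwin-amd64", "addons", "enable", "metrics-server",
			"-p", profile,
			"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
			"--registries=MetricsServer=fake.domain")
		if out, err := enable.CombinedOutput(); err != nil {
			log.Fatalf("enable failed: %v\n%s", err, out)
		}

		// The check is only that the Deployment exists and describes cleanly;
		// the fake registry means its pods will not successfully pull.
		describe := exec.Command("kubectl", "--context", profile,
			"describe", "deploy/metrics-server", "-n", "kube-system")
		out, err := describe.CombinedOutput()
		if err != nil {
			log.Fatalf("describe failed: %v\n%s", err, out)
		}
		fmt.Println(string(out))
	}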

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-599000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-599000 --alsologtostderr -v=3: (8.414081512s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-599000 -n embed-certs-599000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-599000 -n embed-certs-599000: exit status 7 (66.895778ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-599000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)
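EnableAddonAfterStop is the stopped-cluster counterpart: status --format={{.Host}} exits 7 and prints Stopped, which the test tolerates ("may be ok"), and the dashboard addon is then enabled against the stopped profile so it takes effect on the next start. A sketch of that exit-7 handling, with profile and flags copied from the log:

	// addonafterstop_sketch.go - enable an addon on a stopped profile, tolerating exit status 7.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		profile := "embed-certs-599000"
		bin := "out/minikube-darwin-amd64"

		status := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := status.CombinedOutput()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			// Exit 7 with "Stopped" is the expected state here, as the report notes.
			log.Printf("host status: %s (exit 7, may be ok)", out)
		} else if err != nil {
			log.Fatalf("status failed unexpectedly: %v\n%s", err, out)
		}

		enable := exec.Command(bin, "addons", "enable", "dashboard", "-p", profile,
			"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
		if out, err := enable.CombinedOutput(); err != nil {
			log.Fatalf("enabling dashboard on the stopped profile failed: %v\n%s", err, out)
		}
	}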

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (313.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-599000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3
E0725 12:17:37.772497    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/false-691000/client.crt: no such file or directory
E0725 12:17:44.097019    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/calico-691000/client.crt: no such file or directory
E0725 12:18:48.798958    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:18:48.804705    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:18:48.816909    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:18:48.838183    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:18:48.879044    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:18:48.959665    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:18:49.121485    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:18:49.442120    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:18:50.083477    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:18:51.365227    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:18:53.929679    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:18:59.054766    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:19:01.733436    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:19:09.301490    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:19:29.786646    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:20:04.086477    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/auto-691000/client.crt: no such file or directory
E0725 12:20:04.855546    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0725 12:20:05.212517    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:20:05.218063    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:20:05.230121    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:20:05.251033    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:20:05.292812    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:20:05.372992    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:20:05.533308    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:20:05.853797    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:20:06.494119    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:20:07.774692    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
E0725 12:20:10.335350    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-599000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3: (5m13.352074982s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-599000 -n embed-certs-599000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (313.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-620000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ff59968c-25fe-4921-b379-dc8f53cce394] Pending
E0725 12:20:10.749834    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ff59968c-25fe-4921-b379-dc8f53cce394] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ff59968c-25fe-4921-b379-dc8f53cce394] Running
E0725 12:20:15.455722    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003303109s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-620000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-620000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-620000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-620000 --alsologtostderr -v=3
E0725 12:20:25.012148    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/kubenet-691000/client.crt: no such file or directory
E0725 12:20:25.697184    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/old-k8s-version-777000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-620000 --alsologtostderr -v=3: (8.427231913s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-620000 -n default-k8s-diff-port-620000: exit status 7 (66.599732ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-620000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-pwtcv" [027fac2f-ee74-456b-ad40-4af22fad7fd8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002961495s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-308000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-308000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (41.599523734s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.60s)
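The newest-cni FirstStart exercises a CNI-oriented configuration: a reduced --wait set, the ServerSideApply feature gate, --network-plugin=cni and a kubeadm pod-network CIDR override, on the v1.31.0-beta.0 build. Wrapped the way the harness drives external commands (all flags are copied verbatim from the log; the wrapper itself is just illustrative):

	// newestcni_sketch.go - drive the CNI-flavoured first start shown in the log.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "newest-cni-308000",
			"--memory=2200", "--alsologtostderr",
			"--wait=apiserver,system_pods,default_sa",
			"--feature-gates", "ServerSideApply=true",
			"--network-plugin=cni",
			"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
			"--driver=hyperkit", "--kubernetes-version=v1.31.0-beta.0")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("newest-cni first start failed: %v", err)
		}
	}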

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-pwtcv" [027fac2f-ee74-456b-ad40-4af22fad7fd8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007874117s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-599000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-599000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (1.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-599000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-599000 -n embed-certs-599000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-599000 -n embed-certs-599000: exit status 2 (161.011225ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-599000 -n embed-certs-599000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-599000 -n embed-certs-599000: exit status 2 (161.004317ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-599000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-599000 -n embed-certs-599000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-599000 -n embed-certs-599000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.94s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-308000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-308000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-308000 --alsologtostderr -v=3: (8.457684941s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-308000 -n newest-cni-308000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-308000 -n newest-cni-308000: exit status 7 (67.337389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-308000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (51.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-308000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0725 12:23:48.823396    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
E0725 12:24:01.749782    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/bridge-691000/client.crt: no such file or directory
E0725 12:24:16.514774    1732 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1195/.minikube/profiles/no-preload-710000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-308000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (51.179163423s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-308000 -n newest-cni-308000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-308000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (1.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-308000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-308000 -n newest-cni-308000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-308000 -n newest-cni-308000: exit status 2 (153.901916ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-308000 -n newest-cni-308000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-308000 -n newest-cni-308000: exit status 2 (158.405784ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-308000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-308000 -n newest-cni-308000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-308000 -n newest-cni-308000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.74s)

                                                
                                    

Test skip (22/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-691000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-691000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-691000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-691000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691000"

                                                
                                                
----------------------- debugLogs end: cilium-691000 [took: 5.531045452s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-691000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-691000
--- SKIP: TestNetworkPlugins/group/cilium (5.74s)
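
Even though the cilium group is skipped as outdated, the harness still spends roughly 5.5s running its debugLogs collection against a profile that was never created, which is why every probe above reports a missing context or profile before the cleanup delete runs. If reproducing those probes by hand, the log's own hints apply: start the profile first, then delete it afterwards as the test does. A sketch only (the --cni=cilium flag is an assumption about how a cilium network-plugin profile would normally be created; it is not taken from this run):

	# create the profile the debug probes expect, inspect it, then clean up as the test does
	minikube start -p cilium-691000 --driver=hyperkit --cni=cilium
	minikube profile list
	minikube delete -p cilium-691000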

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-511000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-511000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                    