Test Report: Hyperkit_macOS 20090

20ecd3658b86897ae797acf630cebadf77816c63:2024-12-13:37470

Tests failed (13/221)

TestOffline (195.57s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-990000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-990000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m10.08602814s)
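The failing step is the harness shelling out to the freshly built minikube binary and asserting on its exit code; the "(dbg) Run" / "(dbg) Non-zero exit" pair above is how that wrapper reports a failure. As a rough sketch only (the helper below is hypothetical, not the actual code in aab_offline_test.go), the same check can be written with os/exec:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runMinikubeStart is a hypothetical stand-in for the test wrapper: run the
// binary, time it, and surface any non-zero exit status with its output.
func runMinikubeStart() error {
	begin := time.Now()
	cmd := exec.Command("out/minikube-darwin-amd64",
		"start", "-p", "offline-docker-990000",
		"--alsologtostderr", "-v=1", "--memory=2048",
		"--wait=true", "--driver=hyperkit")
	out, err := cmd.CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok {
		// Mirrors the "Non-zero exit ... exit status 80 (3m10.08602814s)" line above.
		return fmt.Errorf("non-zero exit: exit status %d (%s)\n%s",
			ee.ExitCode(), time.Since(begin), out)
	}
	return err
}

func main() {
	if err := runMinikubeStart(); err != nil {
		fmt.Println(err)
	}
}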

-- stdout --
	* [offline-docker-990000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-990000" primary control-plane node in "offline-docker-990000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-990000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1213 12:05:23.308666    7413 out.go:345] Setting OutFile to fd 1 ...
	I1213 12:05:23.310396    7413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 12:05:23.310406    7413 out.go:358] Setting ErrFile to fd 2...
	I1213 12:05:23.310412    7413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 12:05:23.310609    7413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 12:05:23.312535    7413 out.go:352] Setting JSON to false
	I1213 12:05:23.345527    7413 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3893,"bootTime":1734116430,"procs":553,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 12:05:23.345626    7413 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 12:05:23.415252    7413 out.go:177] * [offline-docker-990000] minikube v1.34.0 on Darwin 15.1.1
	I1213 12:05:23.458033    7413 notify.go:220] Checking for updates...
	I1213 12:05:23.458040    7413 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 12:05:23.479102    7413 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 12:05:23.500098    7413 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 12:05:23.520926    7413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:05:23.541111    7413 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:05:23.562120    7413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:05:23.583155    7413 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 12:05:23.615065    7413 out.go:177] * Using the hyperkit driver based on user configuration
	I1213 12:05:23.655838    7413 start.go:297] selected driver: hyperkit
	I1213 12:05:23.655857    7413 start.go:901] validating driver "hyperkit" against <nil>
	I1213 12:05:23.655867    7413 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:05:23.661646    7413 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:05:23.661796    7413 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20090-800/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1213 12:05:23.673609    7413 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1213 12:05:23.680861    7413 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:05:23.680901    7413 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1213 12:05:23.680940    7413 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 12:05:23.681202    7413 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:05:23.681239    7413 cni.go:84] Creating CNI manager for ""
	I1213 12:05:23.681278    7413 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 12:05:23.681287    7413 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 12:05:23.681371    7413 start.go:340] cluster config:
	{Name:offline-docker-990000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:05:23.681459    7413 iso.go:125] acquiring lock: {Name:mke3ec926417a11c6d5b1356d2702df4068fa1cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:05:23.722997    7413 out.go:177] * Starting "offline-docker-990000" primary control-plane node in "offline-docker-990000" cluster
	I1213 12:05:23.744179    7413 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 12:05:23.744244    7413 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1213 12:05:23.744266    7413 cache.go:56] Caching tarball of preloaded images
	I1213 12:05:23.744481    7413 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 12:05:23.744498    7413 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 12:05:23.746921    7413 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/offline-docker-990000/config.json ...
	I1213 12:05:23.746980    7413 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/offline-docker-990000/config.json: {Name:mkc5b597ad25430e4d8b8eb9a1a3ae905b3a5e6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:05:23.768182    7413 start.go:360] acquireMachinesLock for offline-docker-990000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 12:05:23.768376    7413 start.go:364] duration metric: took 159.923µs to acquireMachinesLock for "offline-docker-990000"
	I1213 12:05:23.768425    7413 start.go:93] Provisioning new machine with config: &{Name:offline-docker-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 12:05:23.768531    7413 start.go:125] createHost starting for "" (driver="hyperkit")
	I1213 12:05:23.851176    7413 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1213 12:05:23.851482    7413 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:05:23.851531    7413 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:05:23.864303    7413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53780
	I1213 12:05:23.864604    7413 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:05:23.865005    7413 main.go:141] libmachine: Using API Version  1
	I1213 12:05:23.865019    7413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:05:23.865275    7413 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:05:23.865394    7413 main.go:141] libmachine: (offline-docker-990000) Calling .GetMachineName
	I1213 12:05:23.865507    7413 main.go:141] libmachine: (offline-docker-990000) Calling .DriverName
	I1213 12:05:23.865619    7413 start.go:159] libmachine.API.Create for "offline-docker-990000" (driver="hyperkit")
	I1213 12:05:23.865639    7413 client.go:168] LocalClient.Create starting
	I1213 12:05:23.865674    7413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem
	I1213 12:05:23.865738    7413 main.go:141] libmachine: Decoding PEM data...
	I1213 12:05:23.865754    7413 main.go:141] libmachine: Parsing certificate...
	I1213 12:05:23.865832    7413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem
	I1213 12:05:23.865880    7413 main.go:141] libmachine: Decoding PEM data...
	I1213 12:05:23.865890    7413 main.go:141] libmachine: Parsing certificate...
	I1213 12:05:23.865903    7413 main.go:141] libmachine: Running pre-create checks...
	I1213 12:05:23.865914    7413 main.go:141] libmachine: (offline-docker-990000) Calling .PreCreateCheck
	I1213 12:05:23.866004    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:23.866208    7413 main.go:141] libmachine: (offline-docker-990000) Calling .GetConfigRaw
	I1213 12:05:23.872550    7413 main.go:141] libmachine: Creating machine...
	I1213 12:05:23.872577    7413 main.go:141] libmachine: (offline-docker-990000) Calling .Create
	I1213 12:05:23.872757    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:23.873063    7413 main.go:141] libmachine: (offline-docker-990000) DBG | I1213 12:05:23.872744    7433 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:05:23.873171    7413 main.go:141] libmachine: (offline-docker-990000) Downloading /Users/jenkins/minikube-integration/20090-800/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20090-800/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 12:05:24.267397    7413 main.go:141] libmachine: (offline-docker-990000) DBG | I1213 12:05:24.267292    7433 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/id_rsa...
	I1213 12:05:24.456220    7413 main.go:141] libmachine: (offline-docker-990000) DBG | I1213 12:05:24.456137    7433 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/offline-docker-990000.rawdisk...
	I1213 12:05:24.456232    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Writing magic tar header
	I1213 12:05:24.456242    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Writing SSH key tar header
	I1213 12:05:24.456612    7413 main.go:141] libmachine: (offline-docker-990000) DBG | I1213 12:05:24.456570    7433 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000 ...
	I1213 12:05:24.914782    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:24.914805    7413 main.go:141] libmachine: (offline-docker-990000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/hyperkit.pid
	I1213 12:05:24.914815    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Using UUID 4b93ad34-950e-4a08-8df9-b99910d76adc
	I1213 12:05:25.022602    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Generated MAC 6e:9f:e0:ba:38:05
	I1213 12:05:25.022623    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-990000
	I1213 12:05:25.022656    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4b93ad34-950e-4a08-8df9-b99910d76adc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b25a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:05:25.022727    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4b93ad34-950e-4a08-8df9-b99910d76adc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b25a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:05:25.022773    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4b93ad34-950e-4a08-8df9-b99910d76adc", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/offline-docker-990000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-990000"}
	I1213 12:05:25.022814    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4b93ad34-950e-4a08-8df9-b99910d76adc -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/offline-docker-990000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-990000"
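The CmdLine logged above is simply the argv the driver hands to /usr/local/bin/hyperkit; the real driver assembles it through the moby/hyperkit Go bindings shown in the Start struct. A stdlib-only sketch of the same shape, for illustration only (state-dir paths collapsed to a placeholder, kernel cmdline truncated):

package main

import (
	"fmt"
	"os/exec"
)

// hyperkitCmd rebuilds the invocation from the log above for illustration.
// stateDir stands in for the per-machine directory under .minikube/machines.
func hyperkitCmd(stateDir, uuid string) *exec.Cmd {
	args := []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid", // pid file the driver re-reads on each attempt
		"-c", "2", // CPUs=2
		"-m", "2048M", // Memory=2048MB
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net", // NIC whose generated MAC is searched in dhcpd_leases
		"-U", uuid,
		"-s", "2:0,virtio-blk," + stateDir + "/offline-docker-990000.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd," +
			"earlyprintk=serial loglevel=3 console=ttyS0 ...", // full cmdline as logged above
	}
	return exec.Command("/usr/local/bin/hyperkit", args...)
}

func main() {
	cmd := hyperkitCmd("/tmp/machines/offline-docker-990000",
		"4b93ad34-950e-4a08-8df9-b99910d76adc")
	fmt.Println(cmd.String())
}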
	I1213 12:05:25.022828    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 12:05:25.026049    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 DEBUG: hyperkit: Pid is 7456
	I1213 12:05:25.026561    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 0
	I1213 12:05:25.026571    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:25.026652    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:25.027821    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:25.027939    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:25.027950    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:25.027972    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:25.027986    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:25.027998    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:25.028013    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:25.028084    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:25.028114    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:25.028130    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:25.028149    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:25.028164    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:25.028183    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:25.028204    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:25.028219    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:25.028232    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:25.028247    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:25.028271    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:25.028298    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:25.028310    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:25.028323    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
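Each "Attempt N" block above is one pass of the driver polling /var/db/dhcpd_leases (maintained by macOS bootpd) for the VM's freshly generated MAC, roughly every two seconds; the stdout above shows the driver deleting and recreating the VM after these passes never find a match. Note in the entries that bootpd strips leading zeros from octets (ID "1,62:85:56:4d:f:39" against MAC 62:85:56:4d:0f:39), so a lookup has to normalize both sides. A self-contained sketch of that loop, assuming bootpd's usual key=value entry layout; the helper names are illustrative, not the driver's:

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// trimZeros normalizes "62:85:56:4d:0f:39" to "62:85:56:4d:f:39",
// matching how bootpd records hardware addresses.
func trimZeros(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		if t := strings.TrimLeft(p, "0"); t != "" {
			parts[i] = t
		} else {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

// ipForMAC scans /var/db/dhcpd_leases for an entry whose hw_address
// matches mac and returns the ip_address recorded in the same entry.
func ipForMAC(mac string) (string, bool) {
	data, err := os.ReadFile("/var/db/dhcpd_leases")
	if err != nil {
		return "", false
	}
	want := trimZeros(mac)
	var ip string
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v
		}
		if v, ok := strings.CutPrefix(line, "hw_address=1,"); ok && trimZeros(v) == want {
			return ip, true
		}
	}
	return "", false
}

func main() {
	for attempt := 0; attempt < 60; attempt++ { // the driver logs these passes as "Attempt N"
		if ip, ok := ipForMAC("6e:9f:e0:ba:38:05"); ok {
			fmt.Println("found IP:", ip)
			return
		}
		time.Sleep(2 * time.Second) // matches the ~2s spacing of the attempts above
	}
	fmt.Println("machine never reported an IP") // in the real driver this ends in the start failure seen above
}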
	I1213 12:05:25.037543    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 12:05:25.187490    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 12:05:25.188198    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:05:25.188222    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:05:25.188252    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:05:25.188281    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:05:25.571298    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 12:05:25.571313    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 12:05:25.686384    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:05:25.686405    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:05:25.686419    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:05:25.686425    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:05:25.687262    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 12:05:25.687272    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 12:05:27.028691    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 1
	I1213 12:05:27.028704    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:27.028805    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:27.029843    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:27.029944    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:27.029954    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:27.029963    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:27.029968    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:27.029979    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:27.029987    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:27.029998    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:27.030005    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:27.030011    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:27.030017    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:27.030025    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:27.030033    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:27.030040    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:27.030055    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:27.030068    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:27.030078    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:27.030084    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:27.030090    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:27.030103    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:27.030127    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:29.030222    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 2
	I1213 12:05:29.030234    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:29.030296    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:29.031353    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:29.031437    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:29.031449    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:29.031457    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:29.031464    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:29.031471    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:29.031485    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:29.031490    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:29.031497    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:29.031504    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:29.031510    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:29.031515    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:29.031528    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:29.031547    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:29.031563    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:29.031575    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:29.031584    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:29.031593    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:29.031611    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:29.031621    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:29.031639    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:31.031698    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1213 12:05:31.031713    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 3
	I1213 12:05:31.031732    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:31.031780    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:31.031827    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1213 12:05:31.031839    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1213 12:05:31.032787    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:31.032886    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:31.032896    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:31.032906    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:31.032911    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:31.032917    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:31.032923    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:31.032929    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:31.032935    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:31.032955    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:31.032968    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:31.032981    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:31.032990    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:31.033004    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:31.033018    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:31.033029    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:31.033041    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:31.033057    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:31.033069    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:31.033082    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:31.033090    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:31.052125    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:05:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1213 12:05:33.033136    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 4
	I1213 12:05:33.033151    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:33.033266    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:33.034321    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:33.034481    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:33.034495    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:33.034505    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:33.034515    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:33.034524    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:33.034533    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:33.034543    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:33.034555    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:33.034569    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:33.034579    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:33.034589    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:33.034602    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:33.034613    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:33.034619    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:33.034625    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:33.034630    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:33.034665    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:33.034677    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:33.034685    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:33.034692    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:35.034810    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 5
	I1213 12:05:35.034822    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:35.034899    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:35.035897    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:35.035992    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:35.036001    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:35.036008    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:35.036014    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:35.036020    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:35.036025    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:35.036031    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:35.036039    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:35.036047    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:35.036053    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:35.036073    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:35.036082    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:35.036090    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:35.036097    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:35.036110    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:35.036131    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:35.036140    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:35.036147    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:35.036154    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:35.036162    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:37.037795    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 6
	I1213 12:05:37.037809    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:37.037852    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:37.038846    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:37.038929    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:37.038937    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:37.038953    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:37.038960    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:37.038968    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:37.038975    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:37.038986    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:37.039000    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:37.039024    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:37.039037    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:37.039046    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:37.039051    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:37.039057    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:37.039071    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:37.039079    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:37.039086    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:37.039093    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:37.039099    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:37.039106    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:37.039116    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:39.041040    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 7
	I1213 12:05:39.041055    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:39.041137    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:39.042142    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:39.042242    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:39.042250    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:39.042273    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:39.042283    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:39.042290    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:39.042298    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:39.042311    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:39.042319    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:39.042325    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:39.042340    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:39.042348    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:39.042364    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:39.042375    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:39.042383    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:39.042391    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:39.042397    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:39.042414    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:39.042421    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:39.042430    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:39.042438    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
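Editor's note: before each lease scan, the driver also re-reads the hyperkit pid recorded in the machine's JSON config ("hyperkit pid from json: 7456") and confirms the process is still alive, so a crashed VM fails fast rather than polling for a lease that can never arrive. A common way to probe liveness on Unix without disturbing the process is signal 0; the sketch below assumes that pattern and is not necessarily how the driver checks internally.

	package main

	import (
		"fmt"
		"syscall"
	)

	// alive reports whether a process with the given pid exists.
	// Kill(pid, 0) delivers no signal; it only performs the existence
	// and permission checks. EPERM still implies the process exists.
	func alive(pid int) bool {
		err := syscall.Kill(pid, 0)
		return err == nil || err == syscall.EPERM
	}

	func main() {
		// 7456 is the hyperkit pid from this run's log.
		fmt.Println(alive(7456))
	}
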
	I1213 12:05:41.043296    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 8
	I1213 12:05:41.043321    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:41.043414    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:41.044562    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:41.044695    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:41.044705    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:41.044712    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:41.044718    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:41.044725    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:41.044734    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:41.044744    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:41.044751    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:41.044760    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:41.044773    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:41.044780    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:41.044787    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:41.044794    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:41.044801    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:41.044808    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:41.044813    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:41.044830    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:41.044842    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:41.044850    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:41.044857    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:43.046886    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 9
	I1213 12:05:43.046901    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:43.046949    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:43.047978    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:43.048107    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:43.048115    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:43.048123    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:43.048128    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:43.048134    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:43.048143    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:43.048149    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:43.048154    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:43.048164    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:43.048170    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:43.048177    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:43.048184    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:43.048190    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:43.048202    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:43.048217    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:43.048225    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:43.048232    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:43.048238    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:43.048246    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:43.048254    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:45.050278    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 10
	I1213 12:05:45.050297    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:45.050345    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:45.051399    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:45.051505    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:45.051516    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:45.051523    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:45.051529    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:45.051547    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:45.051560    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:45.051585    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:45.051594    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:45.051601    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:45.051611    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:45.051626    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:45.051639    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:45.051653    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:45.051662    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:45.051671    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:45.051683    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:45.051695    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:45.051703    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:45.051719    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:45.051728    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:47.052794    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 11
	I1213 12:05:47.052808    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:47.052857    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:47.053894    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:47.053975    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:47.053987    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:47.053996    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:47.054002    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:47.054013    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:47.054037    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:47.054046    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:47.054053    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:47.054058    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:47.054065    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:47.054076    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:47.054084    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:47.054091    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:47.054098    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:47.054105    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:47.054111    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:47.054118    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:47.054132    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:47.054141    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:47.054149    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:49.054191    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 12
	I1213 12:05:49.054208    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:49.054275    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:49.055258    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:49.055393    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:49.055403    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:49.055412    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:49.055421    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:49.055427    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:49.055435    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:49.055446    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:49.055459    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:49.055470    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:49.055476    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:49.055499    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:49.055511    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:49.055520    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:49.055533    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:49.055542    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:49.055551    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:49.055557    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:49.055563    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:49.055571    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:49.055580    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:51.056210    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 13
	I1213 12:05:51.056224    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:51.056287    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:51.057337    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:51.057407    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:51.057417    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:51.057430    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:51.057439    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:51.057451    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:51.057456    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:51.057483    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:51.057492    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:51.057499    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:51.057517    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:51.057528    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:51.057536    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:51.057543    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:51.057550    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:51.057558    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:51.057565    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:51.057571    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:51.057576    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:51.057583    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:51.057590    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:53.059617    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 14
	I1213 12:05:53.059632    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:53.059682    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:53.060733    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:53.060818    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:53.060829    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:53.060838    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:53.060844    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:53.060850    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:53.060855    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:53.060862    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:53.060867    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:53.060882    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:53.060893    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:53.060929    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:53.060938    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:53.060947    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:53.060955    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:53.060961    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:53.060970    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:53.060979    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:53.060987    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:53.061010    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:53.061022    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:55.062384    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 15
	I1213 12:05:55.062408    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:55.062432    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:55.063591    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:55.063664    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:55.063673    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:55.063685    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:55.063702    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:55.063711    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:55.063717    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:55.063724    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:55.063736    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:55.063743    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:55.063750    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:55.063778    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:55.063790    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:55.063798    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:55.063806    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:55.063812    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:55.063820    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:55.063847    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:55.063858    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:55.063868    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:55.063878    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:57.065076    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 16
	I1213 12:05:57.065091    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:57.065152    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:57.066155    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:57.066250    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:57.066257    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:57.066264    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:57.066271    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:57.066277    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:57.066282    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:57.066288    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:57.066294    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:57.066324    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:57.066347    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:57.066359    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:57.066367    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:57.066375    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:57.066383    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:57.066396    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:57.066410    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:57.066421    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:57.066431    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:57.066440    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:57.066448    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:05:59.066527    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 17
	I1213 12:05:59.066543    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:05:59.066611    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:05:59.067863    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:05:59.067968    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:05:59.067992    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:05:59.068017    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:05:59.068029    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:05:59.068035    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:05:59.068047    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:05:59.068057    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:05:59.068068    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:05:59.068078    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:05:59.068093    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:05:59.068106    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:05:59.068123    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:05:59.068136    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:05:59.068144    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:05:59.068152    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:05:59.068163    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:05:59.068173    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:05:59.068179    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:05:59.068186    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:05:59.068191    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:06:01.068995    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 18
	I1213 12:06:01.069011    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:01.069087    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:06:01.070166    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:06:01.070285    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:01.070293    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:01.070301    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:01.070306    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:01.070312    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:01.070324    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:01.070334    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:01.070343    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:01.070352    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:01.070359    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:01.070377    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:01.070385    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:01.070392    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:01.070414    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:01.070426    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:01.070433    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:01.070441    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:01.070448    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:01.070454    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:01.070463    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:06:03.072507    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 19
	I1213 12:06:03.072522    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:03.072574    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:06:03.073676    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for 6e:9f:e0:ba:38:05 in /var/db/dhcpd_leases ...
	I1213 12:06:03.073726    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:03.073734    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:03.073742    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:03.073749    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:03.073755    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:03.073760    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:03.073771    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:03.073779    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:03.073786    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:03.073817    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:03.073842    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:03.073853    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:03.073868    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:03.073877    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:03.073892    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:03.073906    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:03.073914    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:03.073920    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:03.073932    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:03.073944    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	[Attempts 20-29, 12:06:05-12:06:23: ten further scans of /var/db/dhcpd_leases; each finds the same 19 lease entries (192.169.0.2 through 192.169.0.20) and no entry for 6e:9f:e0:ba:38:05.]
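The loop above is the hyperkit driver polling the host's DHCP lease database for the VM's MAC address: roughly every 2 seconds it re-reads /var/db/dhcpd_leases and compares each lease's hardware address against 6e:9f:e0:ba:38:05, giving up after about a minute (client.go reports 1m1.2s below). A minimal Go sketch of that polling idea follows; the helper names and retry policy are illustrative assumptions, not the driver's actual code, and the key=value entry format is the one macOS bootpd writes (it drops leading zeros in octets, which is why the log shows IDs like "ca:b1:2f:27:e:f7", so the comparison normalizes both sides).

-- go sketch --
// leasescan.go: poll the macOS DHCP lease database for a VM's MAC address.
// Illustrative sketch only; names and retry policy are assumptions, not the
// actual docker-machine-driver-hyperkit API.
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// normalizeMAC lower-cases a MAC and strips leading zeros from each octet,
// because macOS bootpd writes "e:f7" where the VM reports "0e:f7".
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		parts[i] = strings.TrimLeft(p, "0")
		if parts[i] == "" {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

// findIPForMAC scans the lease file once and returns the ip_address of the
// entry whose hw_address matches the target MAC, if any. It relies on
// ip_address preceding hw_address within each lease block, as bootpd writes.
func findIPForMAC(leasesPath, mac string) (string, bool) {
	data, err := os.ReadFile(leasesPath)
	if err != nil {
		return "", false
	}
	target := normalizeMAC(mac)
	var ip string
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=1,"): // "1," = Ethernet
			if normalizeMAC(strings.TrimPrefix(line, "hw_address=1,")) == target {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	const mac = "6e:9f:e0:ba:38:05" // the MAC the failing run was waiting for
	for attempt := 1; attempt <= 30; attempt++ {
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Printf("found %s at %s (attempt %d)\n", mac, ip, attempt)
			return
		}
		time.Sleep(2 * time.Second) // the log shows ~2 s between attempts
	}
	fmt.Println("IP address never found in dhcp leases file")
}
-- /go sketch --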
	I1213 12:06:25.104893    7413 client.go:171] duration metric: took 1m1.239981404s to LocalClient.Create
	I1213 12:06:27.107024    7413 start.go:128] duration metric: took 1m3.339219365s to createHost
	I1213 12:06:27.107074    7413 start.go:83] releasing machines lock for "offline-docker-990000", held for 1m3.339431288s
	W1213 12:06:27.107103    7413 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:9f:e0:ba:38:05
	I1213 12:06:27.107462    7413 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:06:27.107490    7413 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:06:27.119628    7413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53816
	I1213 12:06:27.120025    7413 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:06:27.120569    7413 main.go:141] libmachine: Using API Version  1
	I1213 12:06:27.120604    7413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:06:27.120836    7413 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:06:27.121287    7413 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:06:27.121323    7413 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:06:27.133333    7413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53818
	I1213 12:06:27.133718    7413 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:06:27.134091    7413 main.go:141] libmachine: Using API Version  1
	I1213 12:06:27.134107    7413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:06:27.134363    7413 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:06:27.134493    7413 main.go:141] libmachine: (offline-docker-990000) Calling .GetState
	I1213 12:06:27.134614    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:27.134670    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:06:27.135892    7413 main.go:141] libmachine: (offline-docker-990000) Calling .DriverName
	I1213 12:06:27.156307    7413 out.go:177] * Deleting "offline-docker-990000" in hyperkit ...
	I1213 12:06:27.198056    7413 main.go:141] libmachine: (offline-docker-990000) Calling .Remove
	I1213 12:06:27.198197    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:27.198207    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:27.198284    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7456
	I1213 12:06:27.199464    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:27.199535    7413 main.go:141] libmachine: (offline-docker-990000) DBG | waiting for graceful shutdown
	[12:06:28-12:06:32: the driver re-checks hyperkit pid 7456 once per second, still waiting for graceful shutdown.]
	I1213 12:06:32.209925    7413 main.go:141] libmachine: (offline-docker-990000) DBG | sending sigkill
	I1213 12:06:32.209934    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:32.222508    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:06:32 WARN : hyperkit: failed to read stderr: EOF
	I1213 12:06:32.222530    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:06:32 WARN : hyperkit: failed to read stdout: EOF
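The teardown above follows a wait-then-kill pattern: Remove asks the hypervisor to exit, polls the pid once per second waiting for a graceful shutdown, then falls back to SIGKILL, at which point the driver's stdout/stderr readers hit EOF (the two WARN lines). A compact Go sketch of that pattern, with assumed helper names and SIGTERM assumed as the polite first signal (not the driver's actual code):

-- go sketch --
// killwait.go: wait-then-SIGKILL teardown, as in the "waiting for graceful
// shutdown" / "sending sigkill" sequence above.
package main

import (
	"os"
	"syscall"
	"time"
)

func stopVM(pid int, patience time.Duration) error {
	proc, err := os.FindProcess(pid) // on Unix this always succeeds
	if err != nil {
		return err
	}
	_ = proc.Signal(syscall.SIGTERM) // ask for a graceful shutdown first (assumption)
	deadline := time.Now().Add(patience)
	for time.Now().Before(deadline) {
		// Signal 0 probes whether the process still exists without
		// actually delivering a signal.
		if proc.Signal(syscall.Signal(0)) != nil {
			return nil // process exited on its own
		}
		time.Sleep(time.Second) // the log polls once per second
	}
	return proc.Signal(syscall.SIGKILL) // give up and hard-kill
}

func main() {
	_ = stopVM(7456, 5*time.Second) // 7456 is the hyperkit pid from the log
}
-- /go sketch --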
	W1213 12:06:32.239566    7413 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:9f:e0:ba:38:05
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:9f:e0:ba:38:05
	I1213 12:06:32.239584    7413 start.go:729] Will try again in 5 seconds ...
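From here the outer start logic takes over: the partially created machine has already been deleted, and start.go waits a fixed 5 seconds before re-acquiring the machines lock and provisioning from scratch. The control flow is roughly the following sketch; all function names are illustrative assumptions, not minikube's actual API:

-- go sketch --
// retryhost.go: delete-and-retry flow around host creation, mirroring the
// "StartHost failed, but will try again" path above.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("IP address never found in dhcp leases file")

func createHost(attempt int) error {
	// Stand-in for the real VM creation; the failing run never got a lease.
	return errNoLease
}

func deleteHost() { fmt.Println(`* Deleting "offline-docker-990000" in hyperkit ...`) }

func startWithRetry(retries int) error {
	var err error
	for attempt := 0; attempt <= retries; attempt++ {
		if err = createHost(attempt); err == nil {
			return nil
		}
		deleteHost() // tear down the partially created machine
		if attempt < retries {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // fixed pause, as in the log
		}
	}
	return fmt.Errorf("failed to start host: %w", err) // the exit status 80 path
}

func main() {
	if err := startWithRetry(1); err != nil {
		fmt.Println(err)
	}
}
-- /go sketch --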
	I1213 12:06:37.241624    7413 start.go:360] acquireMachinesLock for offline-docker-990000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 12:07:30.109952    7413 start.go:364] duration metric: took 52.73454045s to acquireMachinesLock for "offline-docker-990000"
	I1213 12:07:30.109983    7413 start.go:93] Provisioning new machine with config: &{Name:offline-docker-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 12:07:30.110043    7413 start.go:125] createHost starting for "" (driver="hyperkit")
	I1213 12:07:30.131471    7413 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1213 12:07:30.131554    7413 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:07:30.131612    7413 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:07:30.143308    7413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53826
	I1213 12:07:30.143664    7413 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:07:30.143981    7413 main.go:141] libmachine: Using API Version  1
	I1213 12:07:30.144009    7413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:07:30.144225    7413 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:07:30.144320    7413 main.go:141] libmachine: (offline-docker-990000) Calling .GetMachineName
	I1213 12:07:30.144424    7413 main.go:141] libmachine: (offline-docker-990000) Calling .DriverName
	I1213 12:07:30.144519    7413 start.go:159] libmachine.API.Create for "offline-docker-990000" (driver="hyperkit")
	I1213 12:07:30.144533    7413 client.go:168] LocalClient.Create starting
	I1213 12:07:30.144558    7413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem
	I1213 12:07:30.144621    7413 main.go:141] libmachine: Decoding PEM data...
	I1213 12:07:30.144631    7413 main.go:141] libmachine: Parsing certificate...
	I1213 12:07:30.144673    7413 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem
	I1213 12:07:30.144719    7413 main.go:141] libmachine: Decoding PEM data...
	I1213 12:07:30.144730    7413 main.go:141] libmachine: Parsing certificate...
	I1213 12:07:30.144743    7413 main.go:141] libmachine: Running pre-create checks...
	I1213 12:07:30.144748    7413 main.go:141] libmachine: (offline-docker-990000) Calling .PreCreateCheck
	I1213 12:07:30.144818    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:30.144843    7413 main.go:141] libmachine: (offline-docker-990000) Calling .GetConfigRaw
	I1213 12:07:30.180184    7413 main.go:141] libmachine: Creating machine...
	I1213 12:07:30.180192    7413 main.go:141] libmachine: (offline-docker-990000) Calling .Create
	I1213 12:07:30.180282    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:30.180497    7413 main.go:141] libmachine: (offline-docker-990000) DBG | I1213 12:07:30.180272    7623 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:07:30.180530    7413 main.go:141] libmachine: (offline-docker-990000) Downloading /Users/jenkins/minikube-integration/20090-800/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20090-800/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 12:07:30.389125    7413 main.go:141] libmachine: (offline-docker-990000) DBG | I1213 12:07:30.389057    7623 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/id_rsa...
	I1213 12:07:30.724349    7413 main.go:141] libmachine: (offline-docker-990000) DBG | I1213 12:07:30.724257    7623 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/offline-docker-990000.rawdisk...
	I1213 12:07:30.724367    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Writing magic tar header
	I1213 12:07:30.724380    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Writing SSH key tar header
	I1213 12:07:30.724931    7413 main.go:141] libmachine: (offline-docker-990000) DBG | I1213 12:07:30.724892    7623 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000 ...
	I1213 12:07:31.116143    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:31.116164    7413 main.go:141] libmachine: (offline-docker-990000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/hyperkit.pid
	I1213 12:07:31.116222    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Using UUID 499a1ca7-7da5-4914-9c11-21a5cefd7222
	I1213 12:07:31.138303    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Generated MAC a2:b1:35:67:82:21
	I1213 12:07:31.138319    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-990000
	I1213 12:07:31.138388    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"499a1ca7-7da5-4914-9c11-21a5cefd7222", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:07:31.138427    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"499a1ca7-7da5-4914-9c11-21a5cefd7222", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:07:31.138474    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "499a1ca7-7da5-4914-9c11-21a5cefd7222", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/offline-docker-990000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-990000"}
	I1213 12:07:31.138529    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 499a1ca7-7da5-4914-9c11-21a5cefd7222 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/offline-docker-990000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-990000"
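The Arguments and CmdLine dumps above are the exact argv the driver hands to /usr/local/bin/hyperkit: PID file, 2 vCPUs, 2048M of RAM, a virtio-net NIC keyed to the VM UUID (which is what determines the generated MAC searched for below), the raw disk, the boot ISO, and a kexec boot of the kernel/initrd. For orientation only, launching an equivalent process from Go could look like the sketch below; minikube itself goes through the hyperkit Go package shown in the Start/check dumps, stateDir is a placeholder, and actually running this requires root plus real kernel, initrd, and disk files.

-- example (Go) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder; substitute the per-machine state directory from the log.
	stateDir := "/path/to/machine"
	args := []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid", // hyperkit writes its pid here
		"-c", "2", // 2 vCPUs
		"-m", "2048M", // 2 GiB RAM
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net", // vmnet-backed NIC; needs root
		"-U", "499a1ca7-7da5-4914-9c11-21a5cefd7222", // UUID -> stable MAC
		"-s", "2:0,virtio-blk," + stateDir + "/disk.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		// kexec spec plus kernel cmdline travels as a single argv element.
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd,loglevel=3 console=ttyS0",
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	fmt.Println("hyperkit pid:", cmd.Process.Pid)
}
-- /example --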
	I1213 12:07:31.138545    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 12:07:31.141564    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 DEBUG: hyperkit: Pid is 7624
	I1213 12:07:31.142140    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 0
	I1213 12:07:31.142152    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:31.142204    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:31.143386    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:31.143517    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:31.143536    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:31.143550    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:31.143563    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:31.143569    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:31.143580    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:31.143602    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:31.143619    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:31.143628    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:31.143634    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:31.143660    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:31.143669    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:31.143687    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:31.143696    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:31.143702    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:31.143709    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:31.143717    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:31.143725    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:31.143731    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:31.143766    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
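Each "Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases" pass re-reads macOS's bootpd lease database and compares every entry's hardware address against the MAC hyperkit derived from the VM UUID; a match would yield the VM's IP. Note that bootpd prints hex octets without leading zeros (e.g. ae:fd:e9:f:81:f3 in the entries above), so a robust matcher normalizes octets before comparing. The sketch below assumes the usual bootpd key=value block layout (inferred here from the parsed entries the log prints); findLeaseIP and normalizeMAC are hypothetical helpers, not minikube's code.

-- example (Go) --
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// normalizeMAC lower-cases a MAC and strips leading zeros from each octet,
// matching bootpd's habit of printing "0f" as "f".
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		if n, err := strconv.ParseUint(p, 16, 8); err == nil {
			parts[i] = strconv.FormatUint(n, 16)
		}
	}
	return strings.Join(parts, ":")
}

// findLeaseIP scans blocks of the assumed form:
//
//	{
//		name=minikube
//		ip_address=192.169.0.20
//		hw_address=1,8a:62:a2:dc:49:89
//		...
//	}
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	want := normalizeMAC(mac)
	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			hw = strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(hw, ","); i >= 0 {
				hw = hw[i+1:] // drop the "1," hardware-type prefix
			}
		case line == "}": // end of one lease block: compare, then reset
			if ip != "" && normalizeMAC(hw) == want {
				return ip, nil
			}
			ip, hw = "", ""
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "a2:b1:35:67:82:21")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("VM IP:", ip)
}
-- /example --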
	I1213 12:07:31.152726    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 12:07:31.161695    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/offline-docker-990000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 12:07:31.162551    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:07:31.162575    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:07:31.162587    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:07:31.162625    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:07:31.543635    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 12:07:31.543652    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 12:07:31.658331    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:07:31.658362    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:07:31.658395    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:07:31.658411    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:07:31.659238    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 12:07:31.659250    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 12:07:33.145201    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 1
	I1213 12:07:33.145220    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:33.145291    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:33.146344    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:33.146406    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:33.146416    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:33.146424    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:33.146429    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:33.146435    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:33.146440    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:33.146446    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:33.146452    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:33.146462    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:33.146472    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:33.146492    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:33.146504    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:33.146519    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:33.146536    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:33.146545    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:33.146554    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:33.146566    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:33.146574    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:33.146587    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:33.146595    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
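The attempts above fire on a fixed ~2-second cadence: the driver simply repeats the lease scan until the MAC appears or an overall deadline expires, and the 19 stale "minikube" leases listed each time are left over from earlier test VMs. A hypothetical shape of that wait loop follows; probe is a stand-in for the lease scan, and the interval and timeout are illustrative, not minikube's configured values.

-- example (Go) --
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls probe every interval until it returns an address or the
// timeout elapses, echoing the 2-second "Attempt N" cadence in the log.
func waitForIP(probe func() (string, error), interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 0; ; attempt++ {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("gave up after %d attempts", attempt+1)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Stand-in probe: a real caller would scan /var/db/dhcpd_leases here.
	probe := func() (string, error) { return "", errors.New("no lease yet") }
	ip, err := waitForIP(probe, 2*time.Second, 10*time.Second)
	fmt.Println(ip, err)
}
-- /example --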
	I1213 12:07:35.146794    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 2
	I1213 12:07:35.146809    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:35.146895    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:35.148013    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:35.148115    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:35.148126    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:35.148134    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:35.148140    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:35.148159    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:35.148166    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:35.148179    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:35.148186    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:35.148192    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:35.148198    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:35.148205    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:35.148213    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:35.148221    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:35.148228    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:35.148237    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:35.148243    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:35.148249    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:35.148265    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:35.148281    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:35.148298    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:37.003689    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1213 12:07:37.003778    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1213 12:07:37.003787    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1213 12:07:37.023032    7413 main.go:141] libmachine: (offline-docker-990000) DBG | 2024/12/13 12:07:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1213 12:07:37.149304    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 3
	I1213 12:07:37.149323    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:37.149492    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:37.150828    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:37.151016    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:37.151030    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:37.151052    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:37.151099    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:37.151113    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:37.151125    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:37.151135    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:37.151144    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:37.151164    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:37.151180    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:37.151191    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:37.151201    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:37.151223    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:37.151234    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:37.151243    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:37.151270    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:37.151303    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:37.151319    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:37.151330    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:37.151345    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:39.152677    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 4
	I1213 12:07:39.152693    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:39.152793    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:39.153806    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:39.153913    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:39.153930    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:39.153948    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:39.153954    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:39.153961    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:39.153968    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:39.153974    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:39.153981    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:39.154002    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:39.154015    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:39.154023    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:39.154032    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:39.154046    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:39.154054    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:39.154064    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:39.154072    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:39.154079    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:39.154086    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:39.154093    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:39.154099    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:41.154193    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 5
	I1213 12:07:41.154209    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:41.154247    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:41.155248    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:41.155398    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:41.155413    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:41.155441    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:41.155453    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:41.155463    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:41.155478    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:41.155485    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:41.155491    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:41.155498    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:41.155504    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:41.155517    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:41.155534    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:41.155551    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:41.155561    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:41.155581    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:41.155609    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:41.155616    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:41.155656    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:41.155662    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:41.155670    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:43.155901    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 6
	I1213 12:07:43.155915    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:43.156000    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:43.157014    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:43.157075    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:43.157083    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:43.157091    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:43.157099    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:43.157112    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:43.157118    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:43.157125    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:43.157132    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:43.157145    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:43.157152    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:43.157158    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:43.157166    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:43.157173    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:43.157194    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:43.157204    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:43.157211    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:43.157219    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:43.157226    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:43.157234    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:43.157277    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:45.158688    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 7
	I1213 12:07:45.158704    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:45.158779    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:45.159793    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:45.159864    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:45.159873    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:45.159885    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:45.159916    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:45.159928    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:45.159938    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:45.159946    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:45.159953    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:45.159960    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:45.159968    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:45.159982    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:45.159990    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:45.159997    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:45.160005    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:45.160015    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:45.160023    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:45.160029    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:45.160038    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:45.160045    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:45.160053    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:47.161951    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 8
	I1213 12:07:47.161963    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:47.162029    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:47.163078    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:47.163173    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:47.163183    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:47.163198    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:47.163207    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:47.163213    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:47.163219    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:47.163226    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:47.163234    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:47.163240    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:47.163247    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:47.163254    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:47.163262    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:47.163268    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:47.163277    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:47.163288    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:47.163296    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:47.163315    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:47.163328    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:47.163341    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:47.163353    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:49.163345    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 9
	I1213 12:07:49.163361    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:49.163451    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:49.164431    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:49.164528    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:49.164538    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:49.164546    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:49.164552    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:49.164558    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:49.164563    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:49.164569    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:49.164574    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:49.164589    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:49.164601    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:49.164623    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:49.164631    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:49.164638    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:49.164646    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:49.164660    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:49.164673    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:49.164682    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:49.164688    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:49.164694    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:49.164703    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:51.166774    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 10
	I1213 12:07:51.166791    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:51.166853    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:51.167913    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:51.168044    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:51.168054    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:51.168063    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:51.168069    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:51.168075    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:51.168080    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:51.168086    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:51.168093    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:51.168098    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:51.168116    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:51.168134    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:51.168145    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:51.168151    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:51.168158    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:51.168166    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:51.168174    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:51.168181    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:51.168194    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:51.168207    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:51.168217    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:53.170266    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 11
	I1213 12:07:53.170280    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:53.170345    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:53.171477    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:53.171557    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:53.171567    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:53.171575    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:53.171580    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:53.171586    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:53.171591    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:53.171616    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:53.171629    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:53.171636    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:53.171644    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:53.171664    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:53.171677    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:53.171684    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:53.171692    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:53.171699    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:53.171706    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:53.171713    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:53.171721    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:53.171728    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:53.171742    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:55.172768    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 12
	I1213 12:07:55.172782    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:55.172883    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:55.173884    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:55.173970    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:55.173991    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:55.174007    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:55.174022    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:55.174031    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:55.174042    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:55.174050    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:55.174056    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:55.174062    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:55.174076    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:55.174081    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:55.174090    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:55.174098    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:55.174107    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:55.174115    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:55.174129    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:55.174142    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:55.174162    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:55.174174    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:55.174184    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:57.176216    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 13
	I1213 12:07:57.176228    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:57.176278    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:57.177291    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:57.177408    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:57.177418    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:57.177426    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:57.177434    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:57.177445    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:57.177454    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:57.177467    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:57.177481    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:57.177490    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:57.177501    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:57.177517    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:57.177525    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:57.177532    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:57.177539    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:57.177546    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:57.177553    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:57.177559    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:57.177567    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:57.177581    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:57.177594    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:59.179181    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 14
	I1213 12:07:59.179195    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:59.179254    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:07:59.180270    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:07:59.180364    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:59.180372    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:59.180384    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:59.180389    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:59.180395    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:59.180403    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:59.180409    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:59.180414    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:59.180421    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:59.180438    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:59.180448    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:59.180455    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:59.180463    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:59.180474    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:59.180485    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:59.180493    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:59.180499    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:59.180511    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:59.180525    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:59.180544    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:01.181747    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 15
	I1213 12:08:01.181762    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:01.181822    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:01.182812    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:01.182948    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:01.182960    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:01.182969    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:01.182976    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:01.182982    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:01.182987    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:01.182993    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:01.183005    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:01.183020    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:01.183032    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:01.183051    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:01.183062    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:01.183069    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:01.183076    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:01.183082    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:01.183088    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:01.183099    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:01.183110    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:01.183117    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:01.183122    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:03.184638    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 16
	I1213 12:08:03.184652    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:03.184723    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:03.185805    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:03.185927    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:03.185937    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:03.185945    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:03.185951    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:03.185972    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:03.185986    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:03.185993    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:03.186005    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:03.186014    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:03.186025    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:03.186033    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:03.186040    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:03.186045    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:03.186061    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:03.186073    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:03.186080    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:03.186088    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:03.186095    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:03.186103    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:03.186112    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:05.187932    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 17
	I1213 12:08:05.187945    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:05.188012    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:05.188994    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:05.189109    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:05.189120    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:05.189127    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:05.189140    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:05.189147    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:05.189155    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:05.189168    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:05.189186    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:05.189194    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:05.189203    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:05.189212    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:05.189220    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:05.189227    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:05.189233    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:05.189241    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:05.189253    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:05.189269    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:05.189281    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:05.189289    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:05.189295    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:07.190000    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 18
	I1213 12:08:07.190017    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:07.190066    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:07.191081    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:07.191201    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:07.191211    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:07.191220    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:07.191225    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:07.191231    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:07.191252    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:07.191258    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:07.191265    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:07.191279    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:07.191289    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:07.191296    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:07.191304    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:07.191313    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:07.191320    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:07.191334    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:07.191341    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:07.191347    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:07.191355    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:07.191363    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:07.191371    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:09.191541    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 19
	I1213 12:08:09.191557    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:09.191624    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:09.192766    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:09.192923    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:09.192933    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:09.192942    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:09.192947    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:09.192963    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:09.192974    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:09.192991    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:09.193004    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:09.193012    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:09.193020    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:09.193028    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:09.193036    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:09.193051    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:09.193066    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:09.193077    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:09.193089    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:09.193104    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:09.193115    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:09.193124    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:09.193131    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:11.194123    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 20
	I1213 12:08:11.194136    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:11.194182    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:11.195248    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:11.195337    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:11.195346    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:11.195361    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:11.195380    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:11.195402    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:11.195417    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:11.195424    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:11.195433    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:11.195441    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:11.195448    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:11.195455    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:11.195463    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:11.195480    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:11.195493    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:11.195501    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:11.195509    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:11.195516    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:11.195532    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:11.195549    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:11.195561    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:13.196135    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 21
	I1213 12:08:13.196151    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:13.196225    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:13.197221    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:13.197319    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:13.197348    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:13.197357    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:13.197362    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:13.197378    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:13.197394    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:13.197403    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:13.197408    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:13.197416    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:13.197423    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:13.197437    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:13.197447    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:13.197454    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:13.197469    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:13.197481    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:13.197493    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:13.197503    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:13.197508    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:13.197517    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:13.197524    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:15.199520    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 22
	I1213 12:08:15.199538    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:15.199596    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:15.200669    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:15.200798    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:15.200810    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:15.200818    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:15.200825    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:15.200835    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:15.200841    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:15.200848    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:15.200857    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:15.200866    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:15.200871    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:15.200878    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:15.200885    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:15.200892    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:15.200899    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:15.200905    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:15.200911    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:15.200920    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:15.200927    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:15.200933    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:15.200941    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:17.203047    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 23
	I1213 12:08:17.203059    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:17.203105    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:17.204088    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:17.204202    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:17.204210    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:17.204219    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:17.204225    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:17.204231    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:17.204237    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:17.204243    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:17.204248    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:17.204260    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:17.204275    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:17.204291    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:17.204299    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:17.204305    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:17.204311    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:17.204326    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:17.204338    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:17.204355    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:17.204366    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:17.204374    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:17.204382    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:19.206428    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 24
	I1213 12:08:19.206442    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:19.206500    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:19.207699    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:19.207776    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:19.207802    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:19.207808    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:19.207840    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:19.207845    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:19.207863    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:19.207870    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:19.207899    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:19.207913    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:19.207922    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:19.207931    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:19.207939    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:19.207947    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:19.207953    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:19.207961    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:19.207966    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:19.207974    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:19.207980    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:19.207988    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:19.208003    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:21.210077    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 25
	I1213 12:08:21.210098    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:21.210108    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:21.211091    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:21.211185    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:21.211195    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:21.211204    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:21.211209    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:21.211215    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:21.211223    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:21.211230    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:21.211238    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:21.211256    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:21.211270    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:21.211278    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:21.211284    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:21.211290    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:21.211300    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:21.211307    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:21.211320    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:21.211328    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:21.211335    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:21.211342    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:21.211365    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:23.212005    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 26
	I1213 12:08:23.212017    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:23.212096    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:23.213135    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:23.213196    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:23.213208    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:23.213218    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:23.213226    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:23.213232    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:23.213238    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:23.213244    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:23.213250    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:23.213255    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:23.213262    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:23.213269    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:23.213276    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:23.213284    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:23.213305    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:23.213320    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:23.213345    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:23.213360    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:23.213369    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:23.213382    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:23.213390    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:25.214084    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 27
	I1213 12:08:25.214099    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:25.214153    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:25.215228    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:25.215319    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:25.215346    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:25.215372    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:25.215384    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:25.215399    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:25.215413    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:25.215421    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:25.215431    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:25.215450    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:25.215466    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:25.215480    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:25.215491    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:25.215505    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:25.215515    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:25.215523    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:25.215532    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:25.215548    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:25.215568    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:25.215583    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:25.215596    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:27.217523    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 28
	I1213 12:08:27.217538    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:27.217617    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:27.218614    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:27.218706    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:27.218714    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:27.218730    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:27.218739    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:27.218745    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:27.218755    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:27.218761    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:27.218767    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:27.218773    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:27.218780    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:27.218799    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:27.218811    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:27.218823    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:27.218829    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:27.218849    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:27.218860    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:27.218878    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:27.218889    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:27.218897    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:27.218905    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:29.218912    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Attempt 29
	I1213 12:08:29.218924    7413 main.go:141] libmachine: (offline-docker-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:29.218998    7413 main.go:141] libmachine: (offline-docker-990000) DBG | hyperkit pid from json: 7624
	I1213 12:08:29.220077    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Searching for a2:b1:35:67:82:21 in /var/db/dhcpd_leases ...
	I1213 12:08:29.220194    7413 main.go:141] libmachine: (offline-docker-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:29.220204    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:29.220211    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:29.220216    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:29.220231    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:29.220239    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:29.220254    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:29.220267    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:29.220284    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:29.220296    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:29.220304    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:29.220309    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:29.220316    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:29.220324    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:29.220332    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:29.220340    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:29.220357    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:29.220375    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:29.220383    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:29.220389    7413 main.go:141] libmachine: (offline-docker-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:31.220743    7413 client.go:171] duration metric: took 1m1.075557017s to LocalClient.Create
	I1213 12:08:33.222887    7413 start.go:128] duration metric: took 1m3.112158065s to createHost
	I1213 12:08:33.222922    7413 start.go:83] releasing machines lock for "offline-docker-990000", held for 1m3.112267063s
	W1213 12:08:33.223024    7413 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-990000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:b1:35:67:82:21
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-990000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:b1:35:67:82:21
	I1213 12:08:33.289251    7413 out.go:201] 
	W1213 12:08:33.310061    7413 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:b1:35:67:82:21
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:b1:35:67:82:21
	W1213 12:08:33.310071    7413 out.go:270] * 
	* 
	W1213 12:08:33.310758    7413 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:08:33.371997    7413 out.go:201] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-990000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-12-13 12:08:33.492048 -0800 PST m=+3961.359926769
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-990000 -n offline-docker-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-990000 -n offline-docker-990000: exit status 7 (115.202819ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:08:33.604979    7644 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1213 12:08:33.605002    7644 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-990000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-990000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-990000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-990000: (5.297154468s)
--- FAIL: TestOffline (195.57s)
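What the retry loop above is doing: the hyperkit driver polls /var/db/dhcpd_leases every two seconds (attempts 22 through 29 are shown) for a lease whose hardware address matches the new VM's MAC, a2:b1:35:67:82:21, and gives up after finding only the 19 stale minikube entries. The Go sketch below reproduces that lookup by hand. It assumes the usual macOS bootpd file layout (one { ... } block per lease containing ip_address= and hw_address=1,<mac> lines); every name in it is illustrative and this is not the driver's actual code.

// leasecheck.go: a minimal sketch (not the driver's code) of the lookup
// logged above: scan /var/db/dhcpd_leases for a lease whose hardware
// address matches a given MAC, and print its IP if found.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC assumes the usual macOS bootpd layout: one "{ ... }" block
// per lease with "ip_address=..." and "hw_address=1,<mac>" lines, where
// ip_address precedes hw_address. Note bootpd drops leading zeros in MAC
// octets (compare the HWAddress and ID fields in the log entries above),
// so the MAC passed in must be normalized the same way.
func findIPForMAC(path, mac string) (string, bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", false, err
	}
	defer f.Close()

	ip := ""
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip = "" // start of a new lease block; reset state
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address looks like "1,aa:bb:cc:dd:ee:ff"; match the MAC part.
			if strings.HasSuffix(line, ","+mac) {
				return ip, true, nil
			}
		}
	}
	return "", false, sc.Err()
}

func main() {
	// MAC taken from the log above; reading the leases file requires root.
	ip, ok, err := findIPForMAC("/var/db/dhcpd_leases", "a2:b1:35:67:82:21")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		fmt.Println("no lease for that MAC yet; this is the condition the driver keeps retrying on")
		return
	}
	fmt.Println("lease IP:", ip)
}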

                                                
                                    
TestCertOptions (252s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-524000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E1213 12:14:19.640898    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:15:07.049026    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:15:34.763508    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-524000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.231906298s)

                                                
                                                
-- stdout --
	* [cert-options-524000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-524000" primary control-plane node in "cert-options-524000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-524000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fe:19:1e:77:e3:a3
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-524000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:3d:94:26:72:22
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:3d:94:26:72:22
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-524000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-524000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-524000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (180.898058ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-524000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-524000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-524000 config view
cert_options_test.go:93: Kubeconfig apiserver port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-524000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-524000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (180.788368ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-524000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-524000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-524000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-12-13 12:18:01.318146 -0800 PST m=+4529.179928609
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-524000 -n cert-options-524000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-524000 -n cert-options-524000: exit status 7 (98.495632ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:18:01.414715    7908 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1213 12:18:01.414748    7908 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-524000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-524000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-524000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-524000: (5.258600418s)
--- FAIL: TestCertOptions (252.00s)
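The SAN failures at cert_options_test.go:69 are downstream of the provisioning failure: the openssl read over SSH returned nothing, so none of the requested names could be found. The check itself amounts to parsing the apiserver certificate and looking for the flagged IPs and DNS names among its subject alternative names. A minimal stand-alone Go sketch of that check follows; the input path apiserver.crt is hypothetical, and the expected values are copied from the --apiserver-ips and --apiserver-names flags above.

// sancheck.go: a minimal sketch of the SAN check implied by the failures
// above, using Go's standard crypto/x509; the input path is hypothetical,
// since the real file could not be read in this run.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // e.g. copied out of the VM
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Expected values mirror the --apiserver-ips/--apiserver-names flags above.
	for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
		found := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(net.ParseIP(want)) {
				found = true
				break
			}
		}
		fmt.Printf("SAN IP  %-14s present: %v\n", want, found)
	}
	for _, want := range []string{"localhost", "www.google.com"} {
		found := false
		for _, name := range cert.DNSNames {
			if name == want {
				found = true
				break
			}
		}
		fmt.Printf("SAN DNS %-14s present: %v\n", want, found)
	}
}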

                                                
                                    
TestCertExpiration (1729.22s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E1213 12:13:42.279048    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.732539581s)

                                                
                                                
-- stdout --
	* [cert-expiration-490000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-490000" primary control-plane node in "cert-expiration-490000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-490000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:51:4a:68:d2:c7
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-490000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ce:dc:f5:88:ad:e4
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ce:dc:f5:88:ad:e4
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E1213 12:20:05.355497    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:20:07.052621    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (21m37.109977357s)

                                                
                                                
-- stdout --
	* [cert-expiration-490000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-490000" primary control-plane node in "cert-expiration-490000" cluster
	* Updating the running hyperkit "cert-expiration-490000" VM ...
	* Updating the running hyperkit "cert-expiration-490000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-490000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-490000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-490000" primary control-plane node in "cert-expiration-490000" cluster
	* Updating the running hyperkit "cert-expiration-490000" VM ...
	* Updating the running hyperkit "cert-expiration-490000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-490000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-12-13 12:41:35.11706 -0800 PST m=+5942.910777420
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-490000 -n cert-expiration-490000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-490000 -n cert-expiration-490000: exit status 7 (100.508215ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1213 12:41:35.215299    9542 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1213 12:41:35.215324    9542 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-490000" host is not running, skipping log retrieval (state="Error")
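The status probe above renders a Go text/template ({{.Host}}) against the profile's status, which is why the stdout block contains only "Error". A toy rendering under that assumption (the struct here is hypothetical; minikube's real status type carries more fields):

package main

import (
	"os"
	"text/template"
)

// status is a hypothetical stand-in for minikube's status struct;
// only the Host field used by --format={{.Host}} is modeled here.
type status struct{ Host string }

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Host is "Error" in this run because the driver could not report
	// an IP (see the status stderr above).
	_ = tmpl.Execute(os.Stdout, status{Host: "Error"})
}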
helpers_test.go:175: Cleaning up "cert-expiration-490000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-490000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-490000: (5.270844287s)
--- FAIL: TestCertExpiration (1729.22s)
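For context on the assertion at cert_options_test.go:136: after letting the profile's certs expire, the test restarts it and expects the start output to mention the expired certificates, which never happened here because provisioning died first. A minimal, hypothetical repro sketch (the command line is copied from the log above; the "certificate" marker string is an assumption, not the test's exact check):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Command taken verbatim from the failing test invocation above.
	cmd := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "cert-expiration-490000",
		"--memory=2048", "--cert-expiration=8760h", "--driver=hyperkit")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("start failed (this run saw exit status 80): %v\n", err)
	}
	// "certificate" is an assumed marker; the real test looks for its
	// own expired-cert warning text in the start output.
	if !strings.Contains(strings.ToLower(string(out)), "certificate") {
		fmt.Println("output did not warn about expired certs")
	}
}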

TestDockerFlags (252.5s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-944000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E1213 12:10:07.046615    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:07.053274    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:07.066663    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:07.089228    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:07.131144    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:07.213590    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:07.376974    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:07.699127    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:08.341930    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:09.624395    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:12.187731    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:17.309433    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:27.552975    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:10:48.034605    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:11:28.996952    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-944000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.65590165s)

-- stdout --
	* [docker-flags-944000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-944000" primary control-plane node in "docker-flags-944000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-944000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1213 12:09:42.233476    7699 out.go:345] Setting OutFile to fd 1 ...
	I1213 12:09:42.233694    7699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 12:09:42.233700    7699 out.go:358] Setting ErrFile to fd 2...
	I1213 12:09:42.233703    7699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 12:09:42.233893    7699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 12:09:42.235541    7699 out.go:352] Setting JSON to false
	I1213 12:09:42.265623    7699 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4152,"bootTime":1734116430,"procs":562,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 12:09:42.265784    7699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 12:09:42.289454    7699 out.go:177] * [docker-flags-944000] minikube v1.34.0 on Darwin 15.1.1
	I1213 12:09:42.331774    7699 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 12:09:42.331879    7699 notify.go:220] Checking for updates...
	I1213 12:09:42.374531    7699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 12:09:42.401771    7699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 12:09:42.421086    7699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:09:42.441268    7699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:09:42.462332    7699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:09:42.483584    7699 config.go:182] Loaded profile config "force-systemd-flag-806000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 12:09:42.483673    7699 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 12:09:42.515281    7699 out.go:177] * Using the hyperkit driver based on user configuration
	I1213 12:09:42.557106    7699 start.go:297] selected driver: hyperkit
	I1213 12:09:42.557122    7699 start.go:901] validating driver "hyperkit" against <nil>
	I1213 12:09:42.557142    7699 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:09:42.563005    7699 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:09:42.563148    7699 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20090-800/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1213 12:09:42.574560    7699 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1213 12:09:42.581460    7699 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:09:42.581489    7699 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1213 12:09:42.581524    7699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 12:09:42.581748    7699 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1213 12:09:42.581786    7699 cni.go:84] Creating CNI manager for ""
	I1213 12:09:42.581828    7699 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 12:09:42.581835    7699 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 12:09:42.581911    7699 start.go:340] cluster config:
	{Name:docker-flags-944000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
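The flattened cluster config above shows the repeated --docker-env and --docker-opt flags from the TestDockerFlags invocation landing in DockerEnv:[FOO=BAR BAZ=BAT] and DockerOpt:[debug icc=true]. A sketch of how repeated flags accumulate into such slices (stdlib flag used purely for illustration; minikube itself parses flags with cobra/pflag):

package main

import (
	"flag"
	"fmt"
)

// multi is a flag.Value that appends every occurrence of the flag.
type multi []string

func (m *multi) String() string     { return fmt.Sprint(*m) }
func (m *multi) Set(v string) error { *m = append(*m, v); return nil }

func main() {
	var dockerEnv, dockerOpt multi
	flag.Var(&dockerEnv, "docker-env", "environment variables for the Docker daemon")
	flag.Var(&dockerOpt, "docker-opt", "options for the Docker daemon")
	// Mirrors the TestDockerFlags command line above.
	_ = flag.CommandLine.Parse([]string{
		"--docker-env=FOO=BAR", "--docker-env=BAZ=BAT",
		"--docker-opt=debug", "--docker-opt=icc=true",
	})
	fmt.Println("DockerEnv:", dockerEnv) // [FOO=BAR BAZ=BAT]
	fmt.Println("DockerOpt:", dockerOpt) // [debug icc=true]
}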
	I1213 12:09:42.582019    7699 iso.go:125] acquiring lock: {Name:mke3ec926417a11c6d5b1356d2702df4068fa1cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:09:42.624272    7699 out.go:177] * Starting "docker-flags-944000" primary control-plane node in "docker-flags-944000" cluster
	I1213 12:09:42.645102    7699 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 12:09:42.645140    7699 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1213 12:09:42.645155    7699 cache.go:56] Caching tarball of preloaded images
	I1213 12:09:42.645275    7699 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 12:09:42.645285    7699 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 12:09:42.645365    7699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/docker-flags-944000/config.json ...
	I1213 12:09:42.645382    7699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/docker-flags-944000/config.json: {Name:mk3333d80d40c52548f2580845ebd2c017c111b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:09:42.645718    7699 start.go:360] acquireMachinesLock for docker-flags-944000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 12:10:39.535079    7699 start.go:364] duration metric: took 56.888737091s to acquireMachinesLock for "docker-flags-944000"
	I1213 12:10:39.535119    7699 start.go:93] Provisioning new machine with config: &{Name:docker-flags-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 12:10:39.535185    7699 start.go:125] createHost starting for "" (driver="hyperkit")
	I1213 12:10:39.556430    7699 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1213 12:10:39.556608    7699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:10:39.556663    7699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:10:39.568567    7699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53860
	I1213 12:10:39.568944    7699 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:10:39.569426    7699 main.go:141] libmachine: Using API Version  1
	I1213 12:10:39.569438    7699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:10:39.569697    7699 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:10:39.569845    7699 main.go:141] libmachine: (docker-flags-944000) Calling .GetMachineName
	I1213 12:10:39.569953    7699 main.go:141] libmachine: (docker-flags-944000) Calling .DriverName
	I1213 12:10:39.570056    7699 start.go:159] libmachine.API.Create for "docker-flags-944000" (driver="hyperkit")
	I1213 12:10:39.570079    7699 client.go:168] LocalClient.Create starting
	I1213 12:10:39.570133    7699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem
	I1213 12:10:39.570194    7699 main.go:141] libmachine: Decoding PEM data...
	I1213 12:10:39.570218    7699 main.go:141] libmachine: Parsing certificate...
	I1213 12:10:39.570283    7699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem
	I1213 12:10:39.570337    7699 main.go:141] libmachine: Decoding PEM data...
	I1213 12:10:39.570349    7699 main.go:141] libmachine: Parsing certificate...
	I1213 12:10:39.570367    7699 main.go:141] libmachine: Running pre-create checks...
	I1213 12:10:39.570375    7699 main.go:141] libmachine: (docker-flags-944000) Calling .PreCreateCheck
	I1213 12:10:39.570459    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:39.570643    7699 main.go:141] libmachine: (docker-flags-944000) Calling .GetConfigRaw
	I1213 12:10:39.626172    7699 main.go:141] libmachine: Creating machine...
	I1213 12:10:39.626183    7699 main.go:141] libmachine: (docker-flags-944000) Calling .Create
	I1213 12:10:39.626279    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:39.626457    7699 main.go:141] libmachine: (docker-flags-944000) DBG | I1213 12:10:39.626273    7730 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:10:39.626518    7699 main.go:141] libmachine: (docker-flags-944000) Downloading /Users/jenkins/minikube-integration/20090-800/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20090-800/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 12:10:39.820991    7699 main.go:141] libmachine: (docker-flags-944000) DBG | I1213 12:10:39.820894    7730 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/id_rsa...
	I1213 12:10:39.883473    7699 main.go:141] libmachine: (docker-flags-944000) DBG | I1213 12:10:39.883436    7730 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/docker-flags-944000.rawdisk...
	I1213 12:10:39.883485    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Writing magic tar header
	I1213 12:10:39.883513    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Writing SSH key tar header
	I1213 12:10:39.884169    7699 main.go:141] libmachine: (docker-flags-944000) DBG | I1213 12:10:39.884124    7730 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000 ...
	I1213 12:10:40.273800    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:40.273819    7699 main.go:141] libmachine: (docker-flags-944000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/hyperkit.pid
	I1213 12:10:40.273858    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Using UUID 6686b71c-7721-4f0b-9919-78433c68924a
	I1213 12:10:40.297638    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Generated MAC 26:aa:3f:42:1d:c1
	I1213 12:10:40.297660    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-944000
	I1213 12:10:40.297699    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6686b71c-7721-4f0b-9919-78433c68924a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:10:40.297728    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6686b71c-7721-4f0b-9919-78433c68924a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:10:40.297793    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6686b71c-7721-4f0b-9919-78433c68924a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/docker-flags-944000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-944000"}
	I1213 12:10:40.297840    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6686b71c-7721-4f0b-9919-78433c68924a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/docker-flags-944000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-944000"
	I1213 12:10:40.297858    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 12:10:40.300878    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 DEBUG: hyperkit: Pid is 7731
	I1213 12:10:40.302176    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 0
	I1213 12:10:40.302191    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:40.302279    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:10:40.303449    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:10:40.303581    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:40.303600    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:40.303608    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:40.303615    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:40.303620    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:40.303631    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:40.303637    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:40.303644    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:40.303650    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:40.303656    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:40.303661    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:40.303679    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:40.303692    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:40.303704    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:40.303727    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:40.303739    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:40.303749    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:40.303756    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:40.303765    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:40.303770    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
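Each "Attempt N" block above is one poll of /var/db/dhcpd_leases: the driver generated MAC 26:aa:3f:42:1d:c1 for the new VM and keeps rescanning until a lease with that hardware address appears; the "IP address is not set" provisioning failures elsewhere in this report are what happens when no matching lease ever shows up. A sketch of the matching step (hypothetical types mirroring the debug output above, not the driver's actual code):

package main

import "fmt"

// lease models one parsed dhcpd_leases entry as printed in the log.
type lease struct {
	Name      string
	IPAddress string
	HWAddress string
}

// ipForMAC returns the leased IP whose hardware address matches mac.
func ipForMAC(leases []lease, mac string) (string, bool) {
	for _, l := range leases {
		if l.HWAddress == mac {
			return l.IPAddress, true
		}
	}
	return "", false // no lease yet; the driver retries, then gives up
}

func main() {
	// Two entries copied from the lease dump above.
	entries := []lease{
		{Name: "minikube", IPAddress: "192.169.0.20", HWAddress: "8a:62:a2:dc:49:89"},
		{Name: "minikube", IPAddress: "192.169.0.19", HWAddress: "26:f7:dc:6d:c4:17"},
	}
	if ip, ok := ipForMAC(entries, "26:aa:3f:42:1d:c1"); ok {
		fmt.Println("found IP:", ip)
	} else {
		fmt.Println("MAC not in dhcpd_leases yet; driver retries")
	}
}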
	I1213 12:10:40.311883    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 12:10:40.320362    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 12:10:40.321379    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:10:40.321409    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:10:40.321420    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:10:40.321436    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:10:40.704269    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 12:10:40.704284    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 12:10:40.818975    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:10:40.818994    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:10:40.819008    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:10:40.819025    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:10:40.819880    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 12:10:40.819892    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 12:10:42.304194    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 1
	I1213 12:10:42.304210    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:42.304297    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:10:42.305326    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:10:42.305400    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:42.305419    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:42.305430    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:42.305435    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:42.305442    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:42.305448    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:42.305454    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:42.305463    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:42.305469    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:42.305475    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:42.305481    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:42.305501    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:42.305512    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:42.305521    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:42.305530    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:42.305537    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:42.305545    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:42.305551    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:42.305557    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:42.305574    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:44.306750    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 2
	I1213 12:10:44.306764    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:44.306843    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:10:44.307842    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:10:44.307969    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:44.307981    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:44.307988    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:44.307994    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:44.308002    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:44.308007    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:44.308013    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:44.308023    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:44.308030    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:44.308037    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:44.308044    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:44.308055    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:44.308064    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:44.308072    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:44.308078    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:44.308084    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:44.308106    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:44.308119    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:44.308126    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:44.308134    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:46.175840    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1213 12:10:46.175923    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1213 12:10:46.175934    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1213 12:10:46.198063    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:10:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1213 12:10:46.308381    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 3
	I1213 12:10:46.308404    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:46.308698    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:10:46.310486    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:10:46.310711    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:46.310725    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:46.310734    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:46.310741    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:46.310749    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:46.310756    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:46.310779    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:46.310818    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:46.310831    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:46.310840    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:46.310852    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:46.310863    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:46.310874    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:46.310893    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:46.310909    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:46.310920    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:46.310930    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:46.310938    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:46.310946    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:46.310958    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:48.311487    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 4
	I1213 12:10:48.311504    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:48.311568    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:10:48.312613    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:10:48.312733    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:48.312746    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:48.312756    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:48.312766    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:48.312772    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:48.312779    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:48.312792    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:48.312806    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:48.312817    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:48.312824    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:48.312830    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:48.312839    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:48.312846    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:48.312851    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:48.312859    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:48.312865    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:48.312875    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:48.312889    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:48.312907    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:48.312914    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:50.313494    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 5
	I1213 12:10:50.313511    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:50.313565    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:10:50.314614    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:10:50.314712    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:50.314722    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:50.314729    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:50.314736    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:50.314759    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:50.314768    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:50.314774    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:50.314780    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:50.314786    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:50.314793    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:50.314798    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:50.314813    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:50.314829    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:50.314844    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:50.314858    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:50.314873    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:50.314881    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:50.314891    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:50.314900    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:50.314916    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:52.316460    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 6
	I1213 12:10:52.316472    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:52.316544    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:10:52.317533    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:10:52.317629    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:52.317639    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:52.317650    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:52.317656    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:52.317663    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:52.317669    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:52.317674    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:52.317680    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:52.317686    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:52.317691    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:52.317706    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:52.317717    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:52.317733    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:52.317743    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:52.317750    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:52.317757    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:52.317768    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:52.317776    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:52.317783    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:52.317789    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:54.319816    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 7
	I1213 12:10:54.319828    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:54.319900    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:10:54.320938    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:10:54.321025    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:54.321047    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:54.321059    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:54.321072    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:54.321080    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:54.321085    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:54.321091    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:54.321100    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:54.321110    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:54.321116    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:54.321122    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:54.321128    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:54.321137    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:54.321144    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:54.321152    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:54.321158    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:54.321173    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:54.321179    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:54.321185    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:54.321192    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:56.322070    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 8
	I1213 12:10:56.322086    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:56.322131    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:10:56.323233    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:10:56.323352    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:56.323360    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:56.323370    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:56.323376    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:56.323394    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:56.323405    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:56.323412    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:56.323418    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:56.323424    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:56.323431    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:56.323445    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:56.323457    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:56.323464    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:56.323472    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:56.323478    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:56.323486    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:56.323492    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:56.323499    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:56.323506    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:56.323513    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:58.325589    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 9
	I1213 12:10:58.325604    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:58.325657    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:10:58.326699    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:10:58.326779    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:58.326789    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:58.326797    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:58.326803    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:58.326809    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:58.326825    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:58.326842    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:58.326852    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:58.326861    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:58.326868    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:58.326874    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:58.326881    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:58.326887    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:58.326893    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:58.326899    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:58.326905    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:58.326912    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:58.326919    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:58.326926    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:58.326934    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:00.327784    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 10
	I1213 12:11:00.327799    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:00.327860    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:00.329260    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:11:00.329338    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:00.329346    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:00.329354    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:00.329360    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:00.329376    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:00.329382    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:00.329392    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:00.329399    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:00.329413    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:00.329425    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:00.329439    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:00.329448    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:00.329462    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:00.329470    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:00.329477    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:00.329483    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:00.329488    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:00.329496    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:00.329505    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:00.329513    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:02.329572    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 11
	I1213 12:11:02.329583    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:02.329623    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:02.330745    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:11:02.330799    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:02.330809    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:02.330820    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:02.330830    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:02.330838    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:02.330845    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:02.330859    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:02.330869    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:02.330877    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:02.330884    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:02.330890    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:02.330898    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:02.330906    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:02.330912    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:02.330919    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:02.330925    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:02.330931    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:02.330937    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:02.330944    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:02.330952    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:04.332947    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 12
	I1213 12:11:04.332966    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:04.332985    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:04.334050    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:11:04.334116    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:04.334123    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:04.334131    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:04.334136    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:04.334150    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:04.334156    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:04.334166    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:04.334175    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:04.334181    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:04.334187    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:04.334205    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:04.334219    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:04.334235    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:04.334247    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:04.334255    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:04.334263    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:04.334269    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:04.334277    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:04.334284    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:04.334291    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:06.335729    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 13
	I1213 12:11:06.335742    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:06.335808    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:06.336804    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:11:06.336905    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:06.336927    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:06.336948    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:06.336959    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:06.336973    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:06.336980    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:06.336986    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:06.336992    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:06.336998    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:06.337004    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:06.337018    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:06.337027    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:06.337033    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:06.337041    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:06.337053    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:06.337065    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:06.337077    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:06.337083    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:06.337091    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:06.337099    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:08.337094    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 14
	I1213 12:11:08.337108    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:08.337166    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:08.338198    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:11:08.338301    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:08.338310    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:08.338317    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:08.338323    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:08.338332    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:08.338339    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:08.338363    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:08.338378    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:08.338385    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:08.338394    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:08.338413    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:08.338425    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:08.338433    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:08.338440    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:08.338446    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:08.338454    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:08.338468    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:08.338483    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:08.338493    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:08.338501    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:10.340587    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 15
	I1213 12:11:10.340602    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:10.340704    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:10.341692    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:11:10.341843    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:10.341854    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:10.341861    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:10.341887    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:10.341906    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:10.341919    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:10.341927    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:10.341935    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:10.341941    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:10.341969    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:10.341980    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:10.341988    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:10.341994    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:10.342008    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:10.342020    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:10.342028    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:10.342035    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:10.342044    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:10.342053    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:10.342068    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:12.342117    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 16
	I1213 12:11:12.342131    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:12.342207    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:12.343267    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:11:12.343372    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:12.343382    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:12.343388    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:12.343394    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:12.343401    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:12.343408    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:12.343414    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:12.343428    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:12.343434    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:12.343445    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:12.343453    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:12.343459    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:12.343466    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:12.343483    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:12.343494    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:12.343513    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:12.343519    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:12.343538    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:12.343548    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:12.343554    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:14.345590    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 17
	I1213 12:11:14.345603    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:14.345664    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:14.346706    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:11:14.346803    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:14.346812    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:14.346845    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:14.346850    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:14.346860    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:14.346866    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:14.346872    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:14.346878    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:14.346891    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:14.346904    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:14.346922    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:14.346935    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:14.346950    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:14.346958    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:14.346967    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:14.346975    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:14.346981    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:14.346988    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:14.347008    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:14.347019    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:16.348352    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 18
	I1213 12:11:16.348368    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:16.348473    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:16.349473    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases ...
	I1213 12:11:16.349609    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:16.349622    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:16.349631    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:16.349643    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:16.349654    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:16.349661    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:16.349669    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:16.349678    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:16.349684    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:16.349692    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:16.349708    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:16.349721    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:16.349738    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:16.349750    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:16.349758    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:16.349767    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:16.349774    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:16.349781    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:16.349788    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:16.349795    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
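Each "Searching for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases" pass above is the hyperkit driver trying to learn the new VM's IP address by matching the VM's MAC against the host DHCP server's lease records. Below is a minimal Go sketch of that lookup; it is not the driver's actual code, and it assumes the bootpd-style blocks of key=value pairs (name=, ip_address=, hw_address=) that /var/db/dhcpd_leases uses on macOS. One detail the ID fields above hint at (e.g. "1,be:17:0:18:99:2e" for be:17:00:18:99:2e): the stored MAC octets can lack leading zeros, so both sides need normalizing before comparison.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// normalizeMAC lower-cases a MAC and strips leading zeros from each octet,
// because bootpd records "be:17:0:18:99:2e" where the caller expects
// "be:17:00:18:99:2e".
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		p = strings.TrimLeft(p, "0")
		if p == "" {
			p = "0"
		}
		parts[i] = p
	}
	return strings.Join(parts, ":")
}

// ipForMAC scans one leases file and returns the IP bound to mac, if any.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	want := normalizeMAC(mac)
	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// stored as "<type>,<mac>", e.g. "1,be:17:0:18:99:2e"
			if i := strings.Index(line, ","); i >= 0 {
				hw = line[i+1:]
			}
		case line == "}": // end of one lease block
			if hw != "" && normalizeMAC(hw) == want {
				return ip, nil
			}
			ip, hw = "", ""
		}
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "26:aa:3f:42:1d:c1")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}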
	[Attempts 19 through 29 (12:11:18 - 12:11:38, one pass roughly every 2s) repeat the scan above verbatim apart from timestamps: each confirms hyperkit pid 7731, searches for 26:aa:3f:42:1d:c1 in /var/db/dhcpd_leases, and finds the same 19 lease entries (192.169.0.2 through 192.169.0.20) with no match; the identical lease tables are elided.]
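The scans run on a fixed cadence (one attempt every ~2 seconds in this run) until either the MAC shows up or the attempt budget is spent, at which point creation fails with the "IP address never found in dhcp leases file" error logged just below. Here is a sketch of that polling shape, with the lookup passed in as a closure (ipForMAC from the previous sketch would slot in); the ~30-attempt budget is inferred from this log, not a confirmed driver constant.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it yields an IP or the attempt budget is
// spent, mirroring the cadence visible above.
func waitForIP(lookup func() (string, error), attempts int, every time.Duration) (string, error) {
	for i := 1; i <= attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("Attempt %d: not in leases yet\n", i)
		time.Sleep(every)
	}
	return "", errors.New("IP address never found in dhcp leases file")
}

func main() {
	// A lookup that never succeeds, and a short budget so the demo ends
	// quickly; the run above used roughly 30 attempts at 2s each.
	failing := func() (string, error) { return "", errors.New("no entry") }
	if _, err := waitForIP(failing, 3, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}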
	I1213 12:11:40.383868    7699 client.go:171] duration metric: took 1m0.813125276s to LocalClient.Create
	I1213 12:11:42.384272    7699 start.go:128] duration metric: took 1m2.84839884s to createHost
	I1213 12:11:42.384292    7699 start.go:83] releasing machines lock for "docker-flags-944000", held for 1m2.84852607s
	W1213 12:11:42.384308    7699 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:aa:3f:42:1d:c1
	I1213 12:11:42.384646    7699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:11:42.384663    7699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:11:42.396602    7699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53862
	I1213 12:11:42.397001    7699 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:11:42.397348    7699 main.go:141] libmachine: Using API Version  1
	I1213 12:11:42.397358    7699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:11:42.397574    7699 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:11:42.397934    7699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:11:42.397956    7699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:11:42.409722    7699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53864
	I1213 12:11:42.410038    7699 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:11:42.410436    7699 main.go:141] libmachine: Using API Version  1
	I1213 12:11:42.410460    7699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:11:42.410721    7699 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:11:42.410839    7699 main.go:141] libmachine: (docker-flags-944000) Calling .GetState
	I1213 12:11:42.410952    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:42.411011    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:42.412256    7699 main.go:141] libmachine: (docker-flags-944000) Calling .DriverName
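The "Launching plugin server for driver hyperkit" / "Plugin server listening at address 127.0.0.1:..." pairs above reflect libmachine's plugin model: the driver runs as a separate binary that serves RPC on an ephemeral localhost port, and each "Calling .GetVersion", ".GetState", and so on is a call across that connection. Below is a toy version of the server side only; the Driver type and its GetState body are invented for illustration, and the real RPC surface is much larger.

package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Driver is a stand-in for a machine driver; libmachine's real surface
// (GetVersion, GetState, Remove, ...) is much larger.
type Driver struct{}

// GetState mimics the "Calling .GetState" exchanges above.
func (d *Driver) GetState(_ struct{}, state *string) error {
	*state = "Running"
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(new(Driver)); err != nil {
		panic(err)
	}
	// Port 0 asks the OS for an ephemeral port, like 53862/53864 above.
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	fmt.Printf("Plugin server listening at address %s\n", l.Addr())
	srv.Accept(l) // serve connections until the process is killed
}

A client would dial the printed address with rpc.Dial("tcp", addr) and then issue calls such as client.Call("Driver.GetState", struct{}{}, &state).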
	I1213 12:11:42.447621    7699 out.go:177] * Deleting "docker-flags-944000" in hyperkit ...
	I1213 12:11:42.489412    7699 main.go:141] libmachine: (docker-flags-944000) Calling .Remove
	I1213 12:11:42.489537    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:42.489548    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:42.489615    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:42.490820    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:42.490878    7699 main.go:141] libmachine: (docker-flags-944000) DBG | waiting for graceful shutdown
	I1213 12:11:43.491648    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:43.491787    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:43.493003    7699 main.go:141] libmachine: (docker-flags-944000) DBG | waiting for graceful shutdown
	I1213 12:11:44.493316    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:44.493402    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:44.494676    7699 main.go:141] libmachine: (docker-flags-944000) DBG | waiting for graceful shutdown
	I1213 12:11:45.496500    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:45.496577    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:45.497446    7699 main.go:141] libmachine: (docker-flags-944000) DBG | waiting for graceful shutdown
	I1213 12:11:46.498606    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:46.498690    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:46.499876    7699 main.go:141] libmachine: (docker-flags-944000) DBG | waiting for graceful shutdown
	I1213 12:11:47.500773    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:47.500827    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7731
	I1213 12:11:47.501562    7699 main.go:141] libmachine: (docker-flags-944000) DBG | sending sigkill
	I1213 12:11:47.501576    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:47.513070    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:11:47 WARN : hyperkit: failed to read stdout: EOF
	I1213 12:11:47.513097    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:11:47 WARN : hyperkit: failed to read stderr: EOF
	W1213 12:11:47.538021    7699 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:aa:3f:42:1d:c1
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:aa:3f:42:1d:c1
	I1213 12:11:47.538036    7699 start.go:729] Will try again in 5 seconds ...
	I1213 12:11:52.540189    7699 start.go:360] acquireMachinesLock for docker-flags-944000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 12:12:45.357779    7699 start.go:364] duration metric: took 52.816990014s to acquireMachinesLock for "docker-flags-944000"
	I1213 12:12:45.357806    7699 start.go:93] Provisioning new machine with config: &{Name:docker-flags-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 12:12:45.357862    7699 start.go:125] createHost starting for "" (driver="hyperkit")
	I1213 12:12:45.399974    7699 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1213 12:12:45.400084    7699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:12:45.400103    7699 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:12:45.411832    7699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53868
	I1213 12:12:45.412165    7699 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:12:45.412564    7699 main.go:141] libmachine: Using API Version  1
	I1213 12:12:45.412585    7699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:12:45.412789    7699 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:12:45.412896    7699 main.go:141] libmachine: (docker-flags-944000) Calling .GetMachineName
	I1213 12:12:45.412999    7699 main.go:141] libmachine: (docker-flags-944000) Calling .DriverName
	I1213 12:12:45.413105    7699 start.go:159] libmachine.API.Create for "docker-flags-944000" (driver="hyperkit")
	I1213 12:12:45.413119    7699 client.go:168] LocalClient.Create starting
	I1213 12:12:45.413146    7699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem
	I1213 12:12:45.413211    7699 main.go:141] libmachine: Decoding PEM data...
	I1213 12:12:45.413221    7699 main.go:141] libmachine: Parsing certificate...
	I1213 12:12:45.413267    7699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem
	I1213 12:12:45.413315    7699 main.go:141] libmachine: Decoding PEM data...
	I1213 12:12:45.413328    7699 main.go:141] libmachine: Parsing certificate...
	I1213 12:12:45.413341    7699 main.go:141] libmachine: Running pre-create checks...
	I1213 12:12:45.413347    7699 main.go:141] libmachine: (docker-flags-944000) Calling .PreCreateCheck
	I1213 12:12:45.413432    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:45.413473    7699 main.go:141] libmachine: (docker-flags-944000) Calling .GetConfigRaw
	I1213 12:12:45.421401    7699 main.go:141] libmachine: Creating machine...
	I1213 12:12:45.421414    7699 main.go:141] libmachine: (docker-flags-944000) Calling .Create
	I1213 12:12:45.421544    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:45.421706    7699 main.go:141] libmachine: (docker-flags-944000) DBG | I1213 12:12:45.421539    7768 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:12:45.421772    7699 main.go:141] libmachine: (docker-flags-944000) Downloading /Users/jenkins/minikube-integration/20090-800/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20090-800/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 12:12:45.829418    7699 main.go:141] libmachine: (docker-flags-944000) DBG | I1213 12:12:45.829329    7768 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/id_rsa...
	I1213 12:12:46.131953    7699 main.go:141] libmachine: (docker-flags-944000) DBG | I1213 12:12:46.131875    7768 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/docker-flags-944000.rawdisk...
	I1213 12:12:46.131964    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Writing magic tar header
	I1213 12:12:46.131973    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Writing SSH key tar header
	I1213 12:12:46.132347    7699 main.go:141] libmachine: (docker-flags-944000) DBG | I1213 12:12:46.132295    7768 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000 ...
	I1213 12:12:46.523093    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:46.523112    7699 main.go:141] libmachine: (docker-flags-944000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/hyperkit.pid
	I1213 12:12:46.523154    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Using UUID 9a5dff5e-a2ef-471a-bac5-25ee057f1c47
	I1213 12:12:46.545905    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Generated MAC 92:c2:e7:1d:06:5f
	I1213 12:12:46.545926    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-944000
	I1213 12:12:46.545963    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9a5dff5e-a2ef-471a-bac5-25ee057f1c47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000a61b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:12:46.545996    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9a5dff5e-a2ef-471a-bac5-25ee057f1c47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000a61b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:12:46.546054    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9a5dff5e-a2ef-471a-bac5-25ee057f1c47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/docker-flags-944000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-944000"}
	I1213 12:12:46.546098    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9a5dff5e-a2ef-471a-bac5-25ee057f1c47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/docker-flags-944000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-944000"
	I1213 12:12:46.546158    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 12:12:46.549155    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 DEBUG: hyperkit: Pid is 7782
	I1213 12:12:46.549749    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 0
	I1213 12:12:46.549762    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:46.549866    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7782
	I1213 12:12:46.551206    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 92:c2:e7:1d:06:5f in /var/db/dhcpd_leases ...
	I1213 12:12:46.551285    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:46.551293    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:46.551301    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:46.551311    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:46.551323    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:46.551328    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:46.551338    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:46.551347    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:46.551361    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:46.551370    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:46.551381    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:46.551392    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:46.551400    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:46.551407    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:46.551427    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:46.551440    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:46.551464    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:46.551475    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:46.551484    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:46.551492    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:46.559879    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 12:12:46.568472    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/docker-flags-944000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 12:12:46.569352    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:12:46.569384    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:12:46.569398    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:12:46.569412    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:12:46.951622    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 12:12:46.951637    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 12:12:47.066310    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:12:47.066326    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:12:47.066335    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:12:47.066355    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:12:47.067183    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:47 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 12:12:47.067195    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:47 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 12:12:48.551998    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 1
	I1213 12:12:48.552013    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:48.552084    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7782
	I1213 12:12:48.553127    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 92:c2:e7:1d:06:5f in /var/db/dhcpd_leases ...
	I1213 12:12:48.553224    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:48.553234    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:48.553241    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:48.553248    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:48.553255    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:48.553264    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:48.553271    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:48.553277    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:48.553284    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:48.553289    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:48.553302    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:48.553313    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:48.553324    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:48.553332    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:48.553338    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:48.553345    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:48.553354    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:48.553373    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:48.553385    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:48.553392    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:50.555433    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 2
	I1213 12:12:50.555450    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:50.555519    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7782
	I1213 12:12:50.556694    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 92:c2:e7:1d:06:5f in /var/db/dhcpd_leases ...
	I1213 12:12:50.556780    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:50.556788    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:50.556801    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:50.556811    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:50.556822    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:50.556832    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:50.556839    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:50.556846    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:50.556859    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:50.556873    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:50.556884    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:50.556891    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:50.556900    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:50.556915    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:50.556932    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:50.556944    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:50.556952    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:50.556965    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:50.556978    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:50.556987    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:52.420197    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:52 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1213 12:12:52.420307    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:52 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1213 12:12:52.420315    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:52 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1213 12:12:52.440001    7699 main.go:141] libmachine: (docker-flags-944000) DBG | 2024/12/13 12:12:52 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1213 12:12:52.557455    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 3
	I1213 12:12:52.557483    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:52.557671    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7782
	I1213 12:12:52.559503    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 92:c2:e7:1d:06:5f in /var/db/dhcpd_leases ...
	I1213 12:12:52.559722    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:52.559743    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:52.559757    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:52.559768    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:52.559778    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:52.559785    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:52.559805    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:52.559823    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:52.559834    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:52.559857    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:52.559869    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:52.559880    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:52.559893    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:52.559904    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:52.559913    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:52.559924    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:52.559947    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:52.559962    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:52.559973    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:52.559981    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:54.560167    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 4
	I1213 12:12:54.560185    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:54.560274    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7782
	I1213 12:12:54.561318    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 92:c2:e7:1d:06:5f in /var/db/dhcpd_leases ...
	I1213 12:12:54.561435    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:54.561445    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:54.561452    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:54.561459    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:54.561465    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:54.561471    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:54.561479    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:54.561485    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:54.561497    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:54.561506    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:54.561524    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:54.561537    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:54.561551    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:54.561560    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:54.561567    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:54.561573    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:54.561579    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:54.561585    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:54.561593    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:54.561602    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:56.562594    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 5
	I1213 12:12:56.562609    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:56.562653    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7782
	I1213 12:12:56.563735    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 92:c2:e7:1d:06:5f in /var/db/dhcpd_leases ...
	I1213 12:12:56.563885    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:56.563917    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:56.563925    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:56.563938    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:56.563954    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:56.563966    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:56.563978    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:56.563994    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:56.564002    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:56.564009    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:56.564016    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:56.564022    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:56.564032    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:56.564039    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:56.564055    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:56.564068    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:56.564075    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:56.564096    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:56.564102    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:56.564124    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:58.565841    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 6
	I1213 12:12:58.565854    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:58.565913    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7782
	I1213 12:12:58.566914    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 92:c2:e7:1d:06:5f in /var/db/dhcpd_leases ...
	I1213 12:12:58.566988    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:58.567006    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:58.567014    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:58.567019    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:58.567025    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:58.567033    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:58.567041    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:58.567049    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:58.567056    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:58.567061    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:58.567067    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:58.567074    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:58.567081    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:58.567097    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:58.567112    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:58.567120    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:58.567128    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:58.567135    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:58.567142    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:58.567151    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:13:00.569146    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 7
	I1213 12:13:00.569159    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:13:00.569208    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7782
	I1213 12:13:00.570205    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 92:c2:e7:1d:06:5f in /var/db/dhcpd_leases ...
	I1213 12:13:00.570273    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:13:00.570298    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:13:00.570307    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:13:00.570328    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:13:00.570349    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:13:00.570358    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:13:00.570365    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:13:00.570372    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:13:00.570387    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:13:00.570398    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:13:00.570411    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:13:00.570419    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:13:00.570428    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:13:00.570453    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:13:00.570461    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:13:00.570468    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:13:00.570484    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:13:00.570494    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:13:00.570501    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:13:00.570509    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:13:02.570711    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Attempt 8
	I1213 12:13:02.570724    7699 main.go:141] libmachine: (docker-flags-944000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:13:02.570774    7699 main.go:141] libmachine: (docker-flags-944000) DBG | hyperkit pid from json: 7782
	I1213 12:13:02.571777    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Searching for 92:c2:e7:1d:06:5f in /var/db/dhcpd_leases ...
	I1213 12:13:02.571843    7699 main.go:141] libmachine: (docker-flags-944000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:13:02.571858    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:13:02.571865    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:13:02.571872    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:13:02.571879    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:13:02.571886    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:13:02.571910    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:13:02.571923    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:13:02.571932    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:13:02.571937    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:13:02.571944    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:13:02.571965    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:13:02.571984    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:13:02.571998    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:13:02.572009    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:13:02.572018    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:13:02.572032    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:13:02.572045    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:13:02.572058    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:13:02.572068    7699 main.go:141] libmachine: (docker-flags-944000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
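At this point the pattern is clear: the driver is in a poll loop, re-reading /var/db/dhcpd_leases every ~2 seconds and scanning for the new VM's MAC (92:c2:e7:1d:06:5f) among the 19 stale "minikube" entries. A minimal Go sketch of that loop follows — illustrative only, not the actual docker-machine-driver-hyperkit source; the function names are hypothetical. One subtlety the entries above expose: bootpd strips leading zeros from octets (HWAddress ca:b1:2f:27:0e:f7 is keyed as ID 1,ca:b1:2f:27:e:f7), so a matcher has to normalize both sides before comparing.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// normalizeMAC lower-cases a MAC address and strips leading zeros from each
// octet, matching the form bootpd writes to /var/db/dhcpd_leases ("0e" -> "e").
func normalizeMAC(mac string) string {
	octets := strings.Split(strings.ToLower(mac), ":")
	for i, o := range octets {
		trimmed := strings.TrimLeft(o, "0")
		if trimmed == "" {
			trimmed = "0"
		}
		octets[i] = trimmed
	}
	return strings.Join(octets, ":")
}

// leaseExists reports whether any hw_address entry in the leases file matches mac.
func leaseExists(path, mac string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	want := normalizeMAC(mac)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Lease entries carry lines like: hw_address=1,ca:b1:2f:27:e:f7
		if strings.HasPrefix(line, "hw_address=1,") {
			if normalizeMAC(strings.TrimPrefix(line, "hw_address=1,")) == want {
				return true, nil
			}
		}
	}
	return false, sc.Err()
}

func main() {
	const mac = "92:c2:e7:1d:06:5f" // the address this log is waiting on
	for attempt := 1; attempt <= 60; attempt++ {
		found, err := leaseExists("/var/db/dhcpd_leases", mac)
		if err != nil {
			fmt.Fprintln(os.Stderr, "reading leases:", err)
		} else if found {
			fmt.Printf("attempt %d: lease found for %s\n", attempt, mac)
			return
		}
		fmt.Printf("attempt %d: no lease for %s yet\n", attempt, mac)
		time.Sleep(2 * time.Second) // matches the ~2 s cadence in the log
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for a DHCP lease")
}

The log below shows this loop never succeeding: the VM with MAC 92:c2:e7:1d:06:5f never requests a lease, so every scan returns the same 19 pre-existing entries.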
	[... attempts 9 through 21 (12:13:04–12:13:28) omitted: each repeats the identical scan, finding the same 19 "minikube" leases in /var/db/dhcpd_leases and no entry for 92:c2:e7:1d:06:5f ...]
	[Attempts 22 through 29 (12:13:30 to 12:13:44) are omitted here as verbatim repeats of the block above: each scans /var/db/dhcpd_leases every 2 seconds, finds the same 19 leases (192.169.0.2 through 192.169.0.20), and never sees 92:c2:e7:1d:06:5f.]
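	For context on the loop above: the hyperkit driver polls /var/db/dhcpd_leases every two seconds, looking for a lease whose hardware address matches the new VM's MAC (here 92:c2:e7:1d:06:5f); when no lease ever appears, machine creation times out, which is exactly what happens below. The following Go sketch shows one plausible way such a scan-and-poll could be written. It is illustrative only, not the docker-machine-driver-hyperkit source: the brace-delimited name=/ip_address=/hw_address= file layout and every function name are assumptions, and the octet normalization guards against dhcpd dropping leading zeros, a quirk visible in the ID fields above (e.g. ae:fd:e9:f:81:f3 vs HWAddress ae:fd:e9:0f:81:f3).

	// leasescan.go -- a minimal, hypothetical sketch of the lease scan in the
	// log above. NOT the actual docker-machine-driver-hyperkit code.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)

	// normalizeMAC lowercases a MAC address and strips leading zeros from each
	// octet, so "92:c2:e7:1d:06:5f" and "92:c2:e7:1d:6:5f" compare equal.
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			p = strings.TrimLeft(p, "0")
			if p == "" {
				p = "0"
			}
			parts[i] = p
		}
		return strings.Join(parts, ":")
	}

	// findIPByMAC scans a dhcpd_leases-style file for a lease whose hardware
	// address matches mac and returns its IP, or "" if no such lease exists.
	// It assumes ip_address= precedes hw_address= within each { } block.
	func findIPByMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		want := normalizeMAC(mac)
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// e.g. hw_address=1,ae:fd:e9:f:81:f3 -- type prefix, then MAC
				if i := strings.IndexByte(line, ','); i >= 0 && normalizeMAC(line[i+1:]) == want {
					return ip, nil
				}
			}
		}
		return "", sc.Err()
	}

	func main() {
		const mac = "92:c2:e7:1d:06:5f" // the MAC the log above searches for
		for attempt := 1; attempt <= 30; attempt++ {
			ip, err := findIPByMAC("/var/db/dhcpd_leases", mac)
			if err != nil {
				fmt.Fprintln(os.Stderr, "reading leases:", err)
				os.Exit(1)
			}
			if ip != "" {
				fmt.Printf("found %s at %s on attempt %d\n", mac, ip, attempt)
				return
			}
			time.Sleep(2 * time.Second) // the log shows one attempt every ~2s
		}
		fmt.Fprintln(os.Stderr, "IP address never found in dhcp leases file")
		os.Exit(1)
	}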
	I1213 12:13:46.635471    7699 client.go:171] duration metric: took 1m1.22168621s to LocalClient.Create
	I1213 12:13:48.637594    7699 start.go:128] duration metric: took 1m3.279036893s to createHost
	I1213 12:13:48.637629    7699 start.go:83] releasing machines lock for "docker-flags-944000", held for 1m3.279157734s
	W1213 12:13:48.637717    7699 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-944000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 92:c2:e7:1d:06:5f
	I1213 12:13:48.699786    7699 out.go:201] 
	W1213 12:13:48.720913    7699 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 92:c2:e7:1d:06:5f
	W1213 12:13:48.720925    7699 out.go:270] * 
	W1213 12:13:48.721577    7699 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:13:48.782861    7699 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-944000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-944000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-944000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (203.976879ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-944000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-944000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-944000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-944000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (196.641137ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-944000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-944000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-944000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-12-13 12:13:49.296305 -0800 PST m=+4277.160793528
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-944000 -n docker-flags-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-944000 -n docker-flags-944000: exit status 7 (102.831031ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:13:49.396646    7811 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1213 12:13:49.396667    7811 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-944000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-944000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-944000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-944000: (5.271020575s)
--- FAIL: TestDockerFlags (252.50s)

                                                
                                    
TestForceSystemdFlag (252.29s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-806000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
E1213 12:08:42.275474    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:09:19.636927    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-806000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.619306588s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-806000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-806000" primary control-plane node in "force-systemd-flag-806000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-806000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 12:08:38.971636    7656 out.go:345] Setting OutFile to fd 1 ...
	I1213 12:08:38.971865    7656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 12:08:38.971871    7656 out.go:358] Setting ErrFile to fd 2...
	I1213 12:08:38.971874    7656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 12:08:38.972048    7656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 12:08:38.973617    7656 out.go:352] Setting JSON to false
	I1213 12:08:39.003433    7656 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4088,"bootTime":1734116430,"procs":564,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 12:08:39.003603    7656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 12:08:39.027406    7656 out.go:177] * [force-systemd-flag-806000] minikube v1.34.0 on Darwin 15.1.1
	I1213 12:08:39.069602    7656 notify.go:220] Checking for updates...
	I1213 12:08:39.090284    7656 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 12:08:39.111479    7656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 12:08:39.132279    7656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 12:08:39.153315    7656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:08:39.174052    7656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:08:39.194451    7656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:08:39.215707    7656 config.go:182] Loaded profile config "force-systemd-env-990000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 12:08:39.215802    7656 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 12:08:39.248293    7656 out.go:177] * Using the hyperkit driver based on user configuration
	I1213 12:08:39.290307    7656 start.go:297] selected driver: hyperkit
	I1213 12:08:39.290325    7656 start.go:901] validating driver "hyperkit" against <nil>
	I1213 12:08:39.290337    7656 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:08:39.296275    7656 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:08:39.296412    7656 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20090-800/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1213 12:08:39.308153    7656 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1213 12:08:39.315016    7656 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:08:39.315038    7656 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1213 12:08:39.315073    7656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 12:08:39.315318    7656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 12:08:39.315347    7656 cni.go:84] Creating CNI manager for ""
	I1213 12:08:39.315398    7656 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 12:08:39.315407    7656 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 12:08:39.315462    7656 start.go:340] cluster config:
	{Name:force-systemd-flag-806000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-806000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:08:39.315551    7656 iso.go:125] acquiring lock: {Name:mke3ec926417a11c6d5b1356d2702df4068fa1cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:08:39.357325    7656 out.go:177] * Starting "force-systemd-flag-806000" primary control-plane node in "force-systemd-flag-806000" cluster
	I1213 12:08:39.378316    7656 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 12:08:39.378353    7656 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1213 12:08:39.378370    7656 cache.go:56] Caching tarball of preloaded images
	I1213 12:08:39.378494    7656 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 12:08:39.378504    7656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 12:08:39.378584    7656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/force-systemd-flag-806000/config.json ...
	I1213 12:08:39.378602    7656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/force-systemd-flag-806000/config.json: {Name:mk5d7758d1d9ba77afb7a51d8991820690f09b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:08:39.379008    7656 start.go:360] acquireMachinesLock for force-systemd-flag-806000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 12:09:36.289417    7656 start.go:364] duration metric: took 56.909779674s to acquireMachinesLock for "force-systemd-flag-806000"
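	A side note on the 56.9s spent in acquireMachinesLock just above: that is lock contention, not VM work; another concurrently running test still held the per-host machines lock, and the spec shown earlier in the log ({... Delay:500ms Timeout:13m0s Cancel:<nil>}) indicates a 500ms retry delay with a 13-minute timeout. The sketch below illustrates that acquire-with-retry pattern under the same parameters. File-based O_EXCL locking and the lock path are assumptions chosen for a self-contained example; minikube's real lock implementation differs.

	// A hypothetical sketch of retry-until-timeout lock acquisition.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock retries creating a lock file every `delay` until it succeeds
	// or `timeout` elapses. O_EXCL makes creation fail if the file exists, so
	// whoever created it holds the lock until the file is removed.
	func acquireLock(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return nil
			}
			if !errors.Is(err, os.ErrExist) {
				return err
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(delay) // matches the 500ms retry Delay in the log
		}
	}

	func main() {
		start := time.Now()
		if err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("acquired machines lock after %s\n", time.Since(start).Round(time.Millisecond))
	}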
	I1213 12:09:36.289468    7656 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-806000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 12:09:36.289526    7656 start.go:125] createHost starting for "" (driver="hyperkit")
	I1213 12:09:36.331570    7656 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1213 12:09:36.331739    7656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:09:36.331778    7656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:09:36.343648    7656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53840
	I1213 12:09:36.344102    7656 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:09:36.344698    7656 main.go:141] libmachine: Using API Version  1
	I1213 12:09:36.344709    7656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:09:36.345169    7656 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:09:36.345318    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .GetMachineName
	I1213 12:09:36.345422    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .DriverName
	I1213 12:09:36.345547    7656 start.go:159] libmachine.API.Create for "force-systemd-flag-806000" (driver="hyperkit")
	I1213 12:09:36.345610    7656 client.go:168] LocalClient.Create starting
	I1213 12:09:36.345642    7656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem
	I1213 12:09:36.345731    7656 main.go:141] libmachine: Decoding PEM data...
	I1213 12:09:36.345780    7656 main.go:141] libmachine: Parsing certificate...
	I1213 12:09:36.345838    7656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem
	I1213 12:09:36.345885    7656 main.go:141] libmachine: Decoding PEM data...
	I1213 12:09:36.345896    7656 main.go:141] libmachine: Parsing certificate...
	I1213 12:09:36.345912    7656 main.go:141] libmachine: Running pre-create checks...
	I1213 12:09:36.345921    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .PreCreateCheck
	I1213 12:09:36.345997    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:36.346162    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .GetConfigRaw
	I1213 12:09:36.373558    7656 main.go:141] libmachine: Creating machine...
	I1213 12:09:36.373570    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .Create
	I1213 12:09:36.373673    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:36.373849    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | I1213 12:09:36.373666    7684 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:09:36.373894    7656 main.go:141] libmachine: (force-systemd-flag-806000) Downloading /Users/jenkins/minikube-integration/20090-800/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20090-800/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 12:09:36.801852    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | I1213 12:09:36.801760    7684 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/id_rsa...
	I1213 12:09:36.989604    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | I1213 12:09:36.989516    7684 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/force-systemd-flag-806000.rawdisk...
	I1213 12:09:36.989630    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Writing magic tar header
	I1213 12:09:36.989643    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Writing SSH key tar header
	I1213 12:09:36.989997    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | I1213 12:09:36.989946    7684 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000 ...
	I1213 12:09:37.422161    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:37.422179    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/hyperkit.pid
	I1213 12:09:37.422234    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Using UUID 60aeadbb-63d2-4d9c-8e48-1d35b92bf2b8
	I1213 12:09:37.449275    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Generated MAC 7e:ca:7f:c0:37:38
	I1213 12:09:37.449293    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-806000
	I1213 12:09:37.449331    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"60aeadbb-63d2-4d9c-8e48-1d35b92bf2b8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:09:37.449365    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"60aeadbb-63d2-4d9c-8e48-1d35b92bf2b8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:09:37.449417    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "60aeadbb-63d2-4d9c-8e48-1d35b92bf2b8", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/force-systemd-flag-806000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-806000"}
	I1213 12:09:37.449466    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 60aeadbb-63d2-4d9c-8e48-1d35b92bf2b8 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/force-systemd-flag-806000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-806000"
	I1213 12:09:37.449480    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 12:09:37.452584    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 DEBUG: hyperkit: Pid is 7698
	I1213 12:09:37.453149    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 0
	I1213 12:09:37.453162    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:37.453239    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:37.454371    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:37.454498    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:37.454515    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:37.454537    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:37.454552    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:37.454574    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:37.454589    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:37.454601    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:37.454614    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:37.454626    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:37.454659    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:37.454674    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:37.454682    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:37.454689    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:37.454696    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:37.454723    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:37.454736    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:37.454748    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:37.454759    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:37.454768    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:37.454776    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:37.463506    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 12:09:37.471969    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 12:09:37.472925    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:09:37.472948    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:09:37.472969    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:09:37.473001    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:09:37.853206    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 12:09:37.853221    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 12:09:37.967789    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:09:37.967810    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:09:37.967820    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:09:37.967831    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:09:37.968689    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 12:09:37.968702    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 12:09:39.455606    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 1
	I1213 12:09:39.455627    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:39.455676    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:39.456736    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:39.456836    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:39.456849    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:39.456871    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:39.456895    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:39.456917    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:39.456929    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:39.456938    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:39.456946    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:39.456953    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:39.456961    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:39.456976    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:39.456989    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:39.457004    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:39.457015    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:39.457025    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:39.457034    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:39.457044    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:39.457051    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:39.457061    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:39.457071    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:41.457670    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 2
	I1213 12:09:41.457686    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:41.457832    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:41.458956    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:41.459038    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:41.459049    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:41.459064    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:41.459071    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:41.459078    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:41.459083    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:41.459091    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:41.459098    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:41.459105    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:41.459111    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:41.459123    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:41.459132    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:41.459149    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:41.459161    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:41.459169    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:41.459177    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:41.459184    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:41.459191    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:41.459198    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:41.459206    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:43.300161    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1213 12:09:43.300267    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1213 12:09:43.300278    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1213 12:09:43.322871    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:09:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1213 12:09:43.460971    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 3
	I1213 12:09:43.460989    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:43.461169    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:43.462508    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:43.462781    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:43.462795    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:43.462820    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:43.462831    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:43.462840    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:43.462852    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:43.462874    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:43.462903    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:43.462921    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:43.462938    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:43.462950    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:43.462958    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:43.462968    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:43.462977    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:43.462988    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:43.463010    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:43.463022    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:43.463042    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:43.463059    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:43.463074    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:45.463689    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 4
	I1213 12:09:45.463706    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:45.463765    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:45.464821    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:45.464912    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:45.464920    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:45.464928    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:45.464934    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:45.464956    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:45.464967    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:45.464975    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:45.464982    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:45.464988    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:45.464993    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:45.465001    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:45.465010    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:45.465017    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:45.465023    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:45.465029    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:45.465037    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:45.465052    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:45.465063    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:45.465071    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:45.465079    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:47.465641    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 5
	I1213 12:09:47.465654    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:47.465734    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:47.466735    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:47.466863    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:47.466871    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:47.466879    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:47.466889    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:47.466907    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:47.466921    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:47.466945    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:47.466954    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:47.466974    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:47.466986    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:47.467003    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:47.467015    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:47.467023    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:47.467032    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:47.467047    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:47.467056    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:47.467066    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:47.467076    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:47.467085    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:47.467093    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:49.468136    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 6
	I1213 12:09:49.468152    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:49.468243    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:49.469235    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:49.469334    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:49.469345    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:49.469353    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:49.469359    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:49.469369    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:49.469380    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:49.469386    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:49.469393    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:49.469398    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:49.469424    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:49.469433    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:49.469449    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:49.469457    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:49.469467    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:49.469478    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:49.469485    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:49.469493    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:49.469507    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:49.469519    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:49.469528    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:51.471547    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 7
	I1213 12:09:51.471561    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:51.471633    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:51.472711    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:51.472786    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:51.472807    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:51.472823    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:51.472842    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:51.472855    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:51.472863    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:51.472872    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:51.472879    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:51.472886    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:51.472893    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:51.472905    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:51.472918    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:51.472926    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:51.472933    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:51.472938    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:51.472956    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:51.472967    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:51.472977    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:51.472984    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:51.473002    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:53.473214    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 8
	I1213 12:09:53.473225    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:53.473283    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:53.474342    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:53.474429    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:53.474437    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:53.474444    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:53.474449    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:53.474455    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:53.474460    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:53.474466    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:53.474471    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:53.474492    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:53.474505    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:53.474513    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:53.474521    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:53.474546    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:53.474554    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:53.474561    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:53.474566    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:53.474573    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:53.474579    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:53.474589    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:53.474607    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:55.475935    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 9
	I1213 12:09:55.475952    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:55.476083    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:55.477050    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:55.477148    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:55.477160    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:55.477171    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:55.477179    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:55.477185    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:55.477191    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:55.477198    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:55.477205    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:55.477222    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:55.477229    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:55.477244    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:55.477255    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:55.477276    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:55.477284    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:55.477295    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:55.477303    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:55.477310    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:55.477316    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:55.477323    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:55.477330    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
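	(For reference: each "dhcp entry" line above is the driver's parsed view of one stanza in /var/db/dhcpd_leases. Reconstructed from the fields shown, a stanza on disk looks roughly like the following; treat this as an approximation of macOS's stanza format, not a verbatim excerpt from this run. Note the octets in hw_address/identifier are written without leading zeros, which is why the ID fields above show forms like 62:85:56:4d:f:39.)
	
	{
		name=minikube
		ip_address=192.169.0.20
		hw_address=1,8a:62:a2:dc:49:89
		identifier=1,8a:62:a2:dc:49:89
		lease=0x675ca12d
	}
	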
	I1213 12:09:57.478388    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 10
	I1213 12:09:57.478402    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:57.478471    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:57.479477    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:57.479575    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:57.479584    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:57.479600    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:57.479614    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:57.479624    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:57.479629    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:57.479638    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:57.479654    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:57.479664    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:57.479672    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:57.479687    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:57.479695    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:57.479705    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:57.479715    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:57.479725    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:57.479733    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:57.479740    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:57.479748    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:57.479755    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:57.479763    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:59.480864    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 11
	I1213 12:09:59.480877    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:59.480949    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:09:59.481973    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:09:59.482096    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:59.482104    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:59.482112    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:59.482118    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:59.482132    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:59.482144    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:59.482151    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:59.482157    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:59.482169    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:59.482191    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:59.482201    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:59.482208    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:59.482218    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:59.482225    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:59.482233    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:59.482240    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:59.482249    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:59.482259    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:59.482268    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:59.482276    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:01.484300    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 12
	I1213 12:10:01.484313    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:01.484391    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:01.485450    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:01.485539    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:01.485550    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:01.485556    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:01.485562    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:01.485570    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:01.485578    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:01.485584    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:01.485590    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:01.485597    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:01.485603    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:01.485609    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:01.485615    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:01.485621    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:01.485644    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:01.485657    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:01.485679    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:01.485691    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:01.485715    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:01.485728    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:01.485742    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:03.487861    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 13
	I1213 12:10:03.487877    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:03.487942    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:03.488961    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:03.489099    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:03.489110    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:03.489119    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:03.489126    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:03.489133    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:03.489140    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:03.489148    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:03.489158    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:03.489165    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:03.489174    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:03.489180    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:03.489191    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:03.489199    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:03.489206    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:03.489214    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:03.489222    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:03.489230    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:03.489237    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:03.489242    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:03.489265    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:05.490704    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 14
	I1213 12:10:05.490742    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:05.490797    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:05.491800    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:05.491911    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:05.491920    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:05.491927    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:05.491933    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:05.491951    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:05.491963    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:05.491971    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:05.491980    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:05.491991    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:05.492010    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:05.492025    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:05.492034    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:05.492040    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:05.492061    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:05.492071    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:05.492079    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:05.492087    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:05.492092    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:05.492108    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:05.492120    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:07.492215    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 15
	I1213 12:10:07.492230    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:07.492284    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:07.493311    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:07.493444    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:07.493475    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:07.493483    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:07.493489    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:07.493495    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:07.493501    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:07.493519    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:07.493525    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:07.493531    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:07.493539    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:07.493556    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:07.493586    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:07.493615    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:07.493622    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:07.493628    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:07.493637    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:07.493643    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:07.493650    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:07.493662    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:07.493675    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:09.495273    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 16
	I1213 12:10:09.495286    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:09.495329    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:09.496335    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:09.496419    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:09.496430    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:09.496437    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:09.496442    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:09.496450    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:09.496456    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:09.496473    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:09.496484    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:09.496492    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:09.496498    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:09.496521    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:09.496530    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:09.496544    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:09.496552    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:09.496560    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:09.496567    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:09.496574    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:09.496579    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:09.496586    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:09.496600    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:11.497682    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 17
	I1213 12:10:11.497695    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:11.497778    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:11.498780    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:11.498867    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:11.498881    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:11.498891    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:11.498897    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:11.498914    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:11.498928    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:11.498936    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:11.498945    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:11.498952    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:11.498960    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:11.498980    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:11.498988    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:11.498995    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:11.499003    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:11.499009    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:11.499014    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:11.499021    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:11.499028    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:11.499035    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:11.499043    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:13.500880    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 18
	I1213 12:10:13.500895    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:13.500965    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:13.502000    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:13.502127    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:13.502138    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:13.502151    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:13.502157    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:13.502171    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:13.502185    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:13.502202    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:13.502210    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:13.502235    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:13.502250    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:13.502267    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:13.502281    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:13.502298    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:13.502310    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:13.502318    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:13.502326    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:13.502335    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:13.502343    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:13.502349    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:13.502357    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:15.502375    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 19
	I1213 12:10:15.502389    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:15.502445    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:15.503457    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:15.503564    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:15.503574    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:15.503590    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:15.503600    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:15.503609    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:15.503617    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:15.503623    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:15.503629    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:15.503642    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:15.503648    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:15.503656    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:15.503675    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:15.503687    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:15.503703    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:15.503716    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:15.503724    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:15.503732    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:15.503754    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:15.503767    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:15.503775    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:17.505462    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 20
	I1213 12:10:17.505477    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:17.505524    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:17.506571    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:17.506657    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:17.506665    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:17.506674    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:17.506680    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:17.506686    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:17.506693    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:17.506700    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:17.506736    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:17.506749    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:17.506760    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:17.506768    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:17.506783    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:17.506796    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:17.506804    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:17.506813    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:17.506820    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:17.506827    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:17.506834    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:17.506840    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:17.506849    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:19.508840    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 21
	I1213 12:10:19.508858    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:19.508931    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:19.509931    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:19.510010    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:19.510019    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:19.510036    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:19.510045    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:19.510051    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:19.510057    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:19.510063    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:19.510069    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:19.510085    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:19.510099    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:19.510107    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:19.510115    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:19.510136    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:19.510148    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:19.510156    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:19.510166    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:19.510173    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:19.510179    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:19.510189    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:19.510201    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:21.510741    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 22
	I1213 12:10:21.510756    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:21.510815    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:21.511848    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:21.511917    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:21.511927    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:21.511933    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:21.511945    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:21.511957    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:21.511963    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:21.511968    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:21.511989    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:21.512000    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:21.512010    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:21.512020    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:21.512030    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:21.512041    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:21.512054    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:21.512060    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:21.512067    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:21.512074    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:21.512081    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:21.512092    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:21.512109    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:23.512506    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 23
	I1213 12:10:23.512519    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:23.512620    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:23.513888    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:23.513986    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:23.514018    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:23.514036    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:23.514046    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:23.514053    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:23.514059    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:23.514065    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:23.514072    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:23.514078    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:23.514083    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:23.514096    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:23.514109    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:23.514118    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:23.514125    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:23.514131    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:23.514140    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:23.514153    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:23.514161    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:23.514177    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:23.514190    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:25.514493    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 24
	I1213 12:10:25.514506    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:25.514581    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:25.515590    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:25.515719    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:25.515752    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:25.515758    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:25.515765    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:25.515770    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:25.515777    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:25.515786    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:25.515799    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:25.515816    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:25.515824    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:25.515832    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:25.515839    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:25.515845    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:25.515852    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:25.515860    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:25.515878    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:25.515889    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:25.515897    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:25.515902    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:25.515918    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:27.517931    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 25
	I1213 12:10:27.517945    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:27.518016    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:27.519029    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:27.519132    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:27.519173    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:27.519182    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:27.519187    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:27.519202    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:27.519216    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:27.519227    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:27.519238    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:27.519245    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:27.519252    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:27.519262    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:27.519277    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:27.519285    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:27.519294    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:27.519314    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:27.519326    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:27.519343    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:27.519355    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:27.519363    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:27.519369    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:29.521387    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 26
	I1213 12:10:29.521402    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:29.521449    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:29.522521    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:29.522619    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:29.522655    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:29.522661    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:29.522669    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:29.522676    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:29.522682    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:29.522688    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:29.522708    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:29.522721    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:29.522739    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:29.522751    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:29.522759    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:29.522772    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:29.522785    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:29.522793    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:29.522800    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:29.522807    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:29.522815    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:29.522822    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:29.522831    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:31.524850    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 27
	I1213 12:10:31.524865    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:31.524928    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:31.525934    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:31.526061    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:31.526072    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:31.526102    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:31.526114    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:31.526127    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:31.526136    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:31.526143    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:31.526150    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:31.526156    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:31.526162    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:31.526169    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:31.526177    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:31.526184    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:31.526192    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:31.526207    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:31.526219    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:31.526226    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:31.526232    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:31.526246    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:31.526257    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:33.527042    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 28
	I1213 12:10:33.527057    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:33.527133    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:33.528180    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:33.528262    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:33.528271    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:33.528283    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:33.528291    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:33.528297    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:33.528303    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:33.528309    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:33.528317    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:33.528331    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:33.528346    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:33.528355    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:33.528363    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:33.528388    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:33.528400    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:33.528417    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:33.528430    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:33.528440    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:33.528447    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:33.528458    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:33.528465    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:35.529026    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 29
	I1213 12:10:35.529041    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:35.529164    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:35.530475    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 7e:ca:7f:c0:37:38 in /var/db/dhcpd_leases ...
	I1213 12:10:35.530566    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:10:35.530585    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:10:35.530604    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:10:35.530614    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:10:35.530621    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:10:35.530628    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:10:35.530634    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:10:35.530648    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:10:35.530665    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:10:35.530677    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:10:35.530686    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:10:35.530694    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:10:35.530701    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:10:35.530707    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:10:35.530713    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:10:35.530720    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:10:35.530728    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:10:35.530736    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:10:35.530742    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:10:35.530756    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:10:37.532889    7656 client.go:171] duration metric: took 1m1.186604618s to LocalClient.Create
	I1213 12:10:39.534993    7656 start.go:128] duration metric: took 1m3.244777501s to createHost
	I1213 12:10:39.535032    7656 start.go:83] releasing machines lock for "force-systemd-flag-806000", held for 1m3.244922593s
	W1213 12:10:39.535061    7656 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:ca:7f:c0:37:38
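
What the repeated "Attempt N" blocks above record: roughly every two seconds the hyperkit driver re-reads /var/db/dhcpd_leases looking for the MAC address it generated for this VM (7e:ca:7f:c0:37:38). Every pass finds only the 19 stale "minikube" leases left behind by earlier test VMs, and after the final attempt the create fails with the "IP address never found in dhcp leases file" error above. A minimal Go sketch of that polling pattern, assuming macOS bootpd's brace-delimited key=value lease records; this illustrates the loop's shape, not minikube's actual parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// leaseIPForMAC scans the lease file once and returns the ip_address of the
// record whose hw_address line contains mac, or "" if nothing matches. It
// assumes ip_address precedes hw_address inside each record, the order in
// which bootpd writes them.
func leaseIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			return ip, nil
		}
	}
	return "", sc.Err()
}

func main() {
	const mac = "7e:ca:7f:c0:37:38" // the address being searched for above
	for attempt := 0; attempt < 30; attempt++ { // "Attempt 0" .. "Attempt 29"
		ip, err := leaseIPForMAC("/var/db/dhcpd_leases", mac)
		if err == nil && ip != "" {
			fmt.Println("found IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("IP address never found in dhcp leases file")
}

One wrinkle visible in the entries above: bootpd drops leading zeros inside octets when writing hw_address (e2:d2:9:69:a8:b4 for e2:d2:09:69:a8:b4), so a robust matcher has to normalize both sides before comparing.
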
	I1213 12:10:39.535439    7656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:10:39.535459    7656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:10:39.547351    7656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53856
	I1213 12:10:39.547787    7656 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:10:39.548341    7656 main.go:141] libmachine: Using API Version  1
	I1213 12:10:39.548360    7656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:10:39.548618    7656 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:10:39.549059    7656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:10:39.549094    7656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:10:39.561178    7656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53858
	I1213 12:10:39.561564    7656 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:10:39.561923    7656 main.go:141] libmachine: Using API Version  1
	I1213 12:10:39.561936    7656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:10:39.562151    7656 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:10:39.562264    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .GetState
	I1213 12:10:39.562379    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:39.562478    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:39.563693    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .DriverName
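
"Launching plugin server for driver hyperkit" and "Plugin server listening at address 127.0.0.1:53856" reflect libmachine's plugin model: every driver runs as a child process serving RPC on a loopback port, and each "(force-systemd-flag-806000) Calling .GetState"-style line is a call across that connection. A minimal net/rpc sketch of the shape; the type and method here are illustrative, not libmachine's actual wire API:

package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

type Driver struct{}

// GetState stands in for the .GetState calls seen in the log.
func (d *Driver) GetState(_ int, state *string) error {
	*state = "Running"
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(new(Driver)); err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // kernel picks a free port
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Plugin server listening at address", ln.Addr())
	go srv.Accept(ln)

	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var state string
	if err := client.Call("Driver.GetState", 0, &state); err != nil {
		log.Fatal(err)
	}
	fmt.Println("state:", state)
}
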
	I1213 12:10:39.584401    7656 out.go:177] * Deleting "force-systemd-flag-806000" in hyperkit ...
	I1213 12:10:39.626161    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .Remove
	I1213 12:10:39.626300    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:39.626317    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:39.626386    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:39.627539    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:39.627594    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | waiting for graceful shutdown
	I1213 12:10:40.629676    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:40.629793    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:40.631008    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | waiting for graceful shutdown
	I1213 12:10:41.631483    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:41.631554    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:41.632905    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | waiting for graceful shutdown
	I1213 12:10:42.633805    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:42.633973    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:42.634795    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | waiting for graceful shutdown
	I1213 12:10:43.634895    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:43.635027    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:43.636273    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | waiting for graceful shutdown
	I1213 12:10:44.637129    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:44.637196    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7698
	I1213 12:10:44.637973    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | sending sigkill
	I1213 12:10:44.637983    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:10:44.649994    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:10:44 WARN : hyperkit: failed to read stderr: EOF
	I1213 12:10:44.650039    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:10:44 WARN : hyperkit: failed to read stdout: EOF
	W1213 12:10:44.670183    7656 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:ca:7f:c0:37:38
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:ca:7f:c0:37:38
	I1213 12:10:44.670197    7656 start.go:729] Will try again in 5 seconds ...
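
The delete sequence above is a standard stop-with-grace pattern: ask the hyperkit process (pid 7698) to shut down, poll about once a second ("waiting for graceful shutdown"), and after roughly five seconds fall back to SIGKILL ("sending sigkill"); the two EOF warnings are the log readers noticing the process is gone. A sketch of the pattern only; the signal choice and timeout are illustrative, not the driver's exact code:

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// stopWithGrace asks pid to terminate, waits up to grace for it to exit,
// then falls back to SIGKILL.
func stopWithGrace(pid int, grace time.Duration) error {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return err
	}
	_ = proc.Signal(syscall.SIGTERM)
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 delivers nothing; it only checks the process exists.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil // already gone: graceful shutdown worked
		}
		time.Sleep(time.Second) // "waiting for graceful shutdown"
	}
	return proc.Signal(syscall.SIGKILL) // "sending sigkill"
}

func main() {
	if err := stopWithGrace(7698, 5*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, "kill failed:", err)
	}
}
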
	I1213 12:10:49.671612    7656 start.go:360] acquireMachinesLock for force-systemd-flag-806000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 12:11:42.384353    7656 start.go:364] duration metric: took 52.712144378s to acquireMachinesLock for "force-systemd-flag-806000"
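
acquireMachinesLock serializes VM creation across the tests running in parallel on this host, which is why this retry waited 52.7s before provisioning: another test held the lock. A sketch of the acquire-with-retry shape implied by the logged lock config {Delay:500ms Timeout:13m0s}; this uses an in-process mutex as a stand-in, whereas the real lock is cross-process:

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var machines sync.Mutex // stand-in for the cross-process machines lock

// acquire retries every delay until it gets the lock or timeout elapses.
func acquire(delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if machines.TryLock() {
			return nil
		}
		time.Sleep(delay) // Delay:500ms in the logged config
	}
	return errors.New("timed out acquiring machines lock")
}

func main() {
	start := time.Now()
	if err := acquire(500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer machines.Unlock()
	fmt.Printf("took %s to acquireMachinesLock\n", time.Since(start))
}
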
	I1213 12:11:42.384381    7656 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-806000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 12:11:42.384447    7656 start.go:125] createHost starting for "" (driver="hyperkit")
	I1213 12:11:42.405832    7656 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1213 12:11:42.405911    7656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:11:42.405926    7656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:11:42.417533    7656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53866
	I1213 12:11:42.417944    7656 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:11:42.418323    7656 main.go:141] libmachine: Using API Version  1
	I1213 12:11:42.418365    7656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:11:42.418590    7656 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:11:42.418694    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .GetMachineName
	I1213 12:11:42.418803    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .DriverName
	I1213 12:11:42.418938    7656 start.go:159] libmachine.API.Create for "force-systemd-flag-806000" (driver="hyperkit")
	I1213 12:11:42.418953    7656 client.go:168] LocalClient.Create starting
	I1213 12:11:42.418979    7656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem
	I1213 12:11:42.419039    7656 main.go:141] libmachine: Decoding PEM data...
	I1213 12:11:42.419058    7656 main.go:141] libmachine: Parsing certificate...
	I1213 12:11:42.419098    7656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem
	I1213 12:11:42.419144    7656 main.go:141] libmachine: Decoding PEM data...
	I1213 12:11:42.419152    7656 main.go:141] libmachine: Parsing certificate...
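
The "Reading certificate data" / "Decoding PEM data..." / "Parsing certificate..." pairs are the standard-library round trip for the CA and client certificates under .minikube/certs. A minimal equivalent; the path is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("ca.pem") // e.g. .minikube/certs/ca.pem
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data) // "Decoding PEM data..."
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("subject:", cert.Subject)
}
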
	I1213 12:11:42.419172    7656 main.go:141] libmachine: Running pre-create checks...
	I1213 12:11:42.419177    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .PreCreateCheck
	I1213 12:11:42.419284    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:42.419314    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .GetConfigRaw
	I1213 12:11:42.468702    7656 main.go:141] libmachine: Creating machine...
	I1213 12:11:42.468725    7656 main.go:141] libmachine: (force-systemd-flag-806000) Calling .Create
	I1213 12:11:42.468857    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:42.469070    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | I1213 12:11:42.468855    7745 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:11:42.469156    7656 main.go:141] libmachine: (force-systemd-flag-806000) Downloading /Users/jenkins/minikube-integration/20090-800/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20090-800/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 12:11:42.683519    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | I1213 12:11:42.683423    7745 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/id_rsa...
	I1213 12:11:42.848369    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | I1213 12:11:42.848302    7745 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/force-systemd-flag-806000.rawdisk...
	I1213 12:11:42.848383    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Writing magic tar header
	I1213 12:11:42.848395    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Writing SSH key tar header
	I1213 12:11:42.849033    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | I1213 12:11:42.848974    7745 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000 ...
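
The disk-creation lines above boil down to writing a sparse file of the requested size (DiskSize:20000); the "magic tar header" and "SSH key tar header" lines record that a small tar archive holding the freshly generated id_rsa key is embedded at the front of the image for the guest to unpack on first boot. A sketch of the sparse-file step only, with the tar embedding omitted:

package main

import (
	"log"
	"os"
)

func main() {
	const sizeMB = 20000 // DiskSize:20000 from the machine config
	f, err := os.Create("force-systemd-flag-806000.rawdisk")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// Truncate extends the file without allocating blocks, so the 20GB
	// image occupies almost no space until the guest writes to it.
	if err := f.Truncate(int64(sizeMB) * 1024 * 1024); err != nil {
		log.Fatal(err)
	}
}
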
	I1213 12:11:43.239251    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:43.239271    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/hyperkit.pid
	I1213 12:11:43.239285    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Using UUID 7a117430-c2dd-4b0b-8366-df7f731dc670
	I1213 12:11:43.263438    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Generated MAC 4e:c0:8d:6e:40:df
	I1213 12:11:43.263456    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-806000
	I1213 12:11:43.263487    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a117430-c2dd-4b0b-8366-df7f731dc670", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000122330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:11:43.263512    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a117430-c2dd-4b0b-8366-df7f731dc670", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000122330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:11:43.263601    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7a117430-c2dd-4b0b-8366-df7f731dc670", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/force-systemd-flag-806000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-806000"}
	I1213 12:11:43.263647    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7a117430-c2dd-4b0b-8366-df7f731dc670 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/force-systemd-flag-806000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-806000"
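	[editor's note] The Arguments/CmdLine lines above record the exact hyperkit invocation the driver assembled: -F writes a pid file, -c/-m set CPUs and memory, each -s slot attaches a PCI device, and -f kexec boots the kernel/initrd pair directly. For orientation only, a minimal Go sketch of launching that same command via os/exec; this is illustrative, not the driver's actual code, reuses the flags and paths logged above, and elides the console/tty plumbing.

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Paths taken from this test run's state directory (logged above).
		dir := "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000"
		cmd := exec.Command("/usr/local/bin/hyperkit",
			"-A", "-u",
			"-F", dir+"/hyperkit.pid", // pid file, read back as "hyperkit pid from json" below
			"-c", "2", "-m", "2048M",
			"-s", "0:0,hostbridge", "-s", "31,lpc",
			"-s", "1:0,virtio-net", // the NIC whose MAC is searched for in dhcpd_leases
			"-U", "7a117430-c2dd-4b0b-8366-df7f731dc670",
			"-s", "2:0,virtio-blk,"+dir+"/force-systemd-flag-806000.rawdisk",
			"-s", "3,ahci-cd,"+dir+"/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-f", "kexec,"+dir+"/bzimage,"+dir+"/initrd,earlyprintk=serial loglevel=3 console=ttyS0",
		)
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		log.Printf("hyperkit started, pid %d", cmd.Process.Pid)
	}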
	I1213 12:11:43.263669    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 12:11:43.266753    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 DEBUG: hyperkit: Pid is 7746
	I1213 12:11:43.267271    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 0
	I1213 12:11:43.267286    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:43.267392    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:11:43.268813    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:11:43.268947    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:43.268962    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:43.268977    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:43.268984    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:43.269011    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:43.269023    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:43.269034    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:43.269043    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:43.269049    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:43.269068    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:43.269081    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:43.269095    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:43.269107    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:43.269117    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:43.269146    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:43.269163    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:43.269176    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:43.269186    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:43.269221    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:43.269242    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
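	[editor's note] Each attempt above scans /var/db/dhcpd_leases for the VM's freshly generated MAC (4e:c0:8d:6e:40:df). Note that the entries print the zero-stripped form bootpd stores (ID 1,ae:fd:e9:f:81:f3) alongside the padded HWAddress (ae:fd:e9:0f:81:f3), so any matcher must normalize octets on both sides. A rough, self-contained Go sketch of such a lookup, assuming the usual macOS bootpd lease layout of name=/ip_address=/hw_address= lines inside {...} blocks, with ip_address preceding hw_address; the field handling and the findLeaseIP helper are illustrative, not minikube's code.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// normalizeMAC strips leading zeros from each octet so a padded MAC like
	// "ae:fd:e9:0f:81:f3" compares equal to bootpd's "ae:fd:e9:f:81:f3".
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			parts[i] = strings.TrimLeft(p, "0")
			if parts[i] == "" {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}

	// findLeaseIP returns the ip_address of the lease whose hw_address matches
	// mac, assuming ip_address appears before hw_address within each block.
	func findLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		want := normalizeMAC(mac)
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address=1,8a:62:a2:dc:49:89 -- drop the "1," type prefix.
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.Index(hw, ","); i >= 0 {
					hw = hw[i+1:]
				}
				if normalizeMAC(hw) == want {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "4e:c0:8d:6e:40:df")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("lease IP:", ip)
	}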
	I1213 12:11:43.277400    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 12:11:43.285865    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-flag-806000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 12:11:43.286829    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:11:43.286840    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:11:43.286853    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:11:43.286864    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:11:43.671256    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 12:11:43.671273    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 12:11:43.785831    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:11:43.785853    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:11:43.785875    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:11:43.785890    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:11:43.786757    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 12:11:43.786770    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 12:11:45.271230    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 1
	I1213 12:11:45.271247    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:45.271256    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:11:45.272460    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:11:45.272534    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:45.272546    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:45.272556    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:45.272570    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:45.272577    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:45.272583    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:45.272590    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:45.272596    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:45.272604    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:45.272614    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:45.272621    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:45.272626    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:45.272652    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:45.272673    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:45.272683    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:45.272691    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:45.272698    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:45.272703    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:45.272717    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:45.272729    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:47.274759    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 2
	I1213 12:11:47.274778    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:47.274847    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:11:47.275851    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:11:47.275958    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:47.275973    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:47.275980    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:47.275989    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:47.276016    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:47.276025    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:47.276032    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:47.276039    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:47.276048    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:47.276055    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:47.276069    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:47.276091    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:47.276103    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:47.276115    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:47.276131    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:47.276143    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:47.276151    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:47.276156    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:47.276171    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:47.276185    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
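	[editor's note] The "Attempt N" scans above are paced roughly two seconds apart (12:11:43, :45, :47, ...), i.e. a fixed-interval poll against the lease file while the guest boots and requests DHCP. Extending the previous sketch's hypothetical findLeaseIP helper (add "time" to its imports), such a wait loop could look like the following; illustrative only, not the driver's code.

	// waitForIP polls the lease file every two seconds until the MAC shows up
	// or the attempt budget runs out, mirroring the "Attempt N" lines above.
	func waitForIP(path, mac string, attempts int) (string, error) {
		for i := 0; i < attempts; i++ {
			if ip, err := findLeaseIP(path, mac); err == nil {
				return ip, nil
			}
			time.Sleep(2 * time.Second)
		}
		return "", fmt.Errorf("no lease for %s after %d attempts", mac, attempts)
	}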
	I1213 12:11:49.159104    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:49 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1213 12:11:49.159159    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:49 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1213 12:11:49.159169    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:49 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1213 12:11:49.179121    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | 2024/12/13 12:11:49 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1213 12:11:49.277756    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 3
	I1213 12:11:49.277779    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:49.278024    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:11:49.279845    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:11:49.280033    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:49.280047    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:49.280057    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:49.280065    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:49.280073    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:49.280081    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:49.280100    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:49.280114    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:49.280125    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:49.280146    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:49.280175    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:49.280186    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:49.280199    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:49.280210    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:49.280219    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:49.280227    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:49.280236    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:49.280246    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:49.280267    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:49.280280    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:51.280271    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 4
	I1213 12:11:51.280288    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:51.280348    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:11:51.281357    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:11:51.281451    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:51.281461    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:51.281469    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:51.281481    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:51.281490    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:51.281500    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:51.281507    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:51.281523    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:51.281537    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:51.281552    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:51.281569    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:51.281580    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:51.281588    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:51.281597    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:51.281612    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:51.281626    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:51.281638    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:51.281647    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:51.281654    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:51.281660    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:53.283730    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 5
	I1213 12:11:53.283741    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:53.283801    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:11:53.284940    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:11:53.285065    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:53.285074    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:53.285082    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:53.285087    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:53.285093    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:53.285105    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:53.285112    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:53.285121    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:53.285130    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:53.285135    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:53.285143    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:53.285148    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:53.285156    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:53.285173    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:53.285189    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:53.285201    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:53.285209    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:53.285233    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:53.285245    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:53.285260    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:55.285303    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 6
	I1213 12:11:55.285316    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:55.285377    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:11:55.286372    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:11:55.286515    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:55.286526    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:55.286536    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:55.286542    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:55.286551    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:55.286557    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:55.286571    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:55.286584    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:55.286591    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:55.286597    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:55.286603    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:55.286612    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:55.286630    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:55.286642    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:55.286661    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:55.286669    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:55.286676    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:55.286681    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:55.286687    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:55.286696    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:57.287961    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 7
	I1213 12:11:57.287975    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:57.288092    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:11:57.289396    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:11:57.289511    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:57.289520    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:57.289527    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:57.289532    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:57.289542    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:57.289548    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:57.289564    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:57.289572    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:57.289588    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:57.289601    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:57.289608    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:57.289616    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:57.289633    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:57.289641    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:57.289648    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:57.289655    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:57.289664    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:57.289674    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:57.289682    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:57.289691    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:11:59.291643    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 8
	I1213 12:11:59.291665    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:11:59.291699    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:11:59.292690    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:11:59.292764    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:11:59.292776    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:11:59.292788    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:11:59.292796    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:11:59.292804    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:11:59.292813    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:11:59.292819    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:11:59.292826    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:11:59.292834    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:11:59.292840    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:11:59.292846    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:11:59.292854    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:11:59.292862    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:11:59.292870    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:11:59.292886    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:11:59.292900    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:11:59.292923    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:11:59.292935    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:11:59.292943    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:11:59.292955    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:01.294980    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 9
	I1213 12:12:01.294995    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:01.295057    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:01.296105    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:01.296196    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:01.296226    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:01.296238    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:01.296244    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:01.296250    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:01.296256    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:01.296273    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:01.296290    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:01.296308    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:01.296327    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:01.296338    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:01.296347    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:01.296355    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:01.296363    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:01.296369    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:01.296376    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:01.296400    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:01.296412    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:01.296421    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:01.296429    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:03.297860    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 10
	I1213 12:12:03.297876    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:03.297924    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:03.298948    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:03.299091    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:03.299111    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:03.299130    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:03.299144    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:03.299160    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:03.299171    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:03.299179    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:03.299186    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:03.299197    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:03.299206    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:03.299213    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:03.299222    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:03.299229    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:03.299237    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:03.299244    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:03.299251    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:03.299265    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:03.299277    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:03.299297    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:03.299311    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:05.301323    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 11
	I1213 12:12:05.301339    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:05.301393    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:05.302467    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:05.302582    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:05.302592    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:05.302600    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:05.302606    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:05.302623    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:05.302634    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:05.302643    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:05.302649    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:05.302662    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:05.302675    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:05.302684    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:05.302691    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:05.302698    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:05.302716    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:05.302733    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:05.302742    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:05.302749    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:05.302754    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:05.302761    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:05.302769    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:07.304401    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 12
	I1213 12:12:07.304413    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:07.304452    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:07.305488    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:07.305619    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:07.305638    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:07.305647    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:07.305655    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:07.305666    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:07.305681    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:07.305693    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:07.305701    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:07.305709    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:07.305719    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:07.305727    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:07.305734    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:07.305741    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:07.305755    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:07.305767    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:07.305863    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:07.305926    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:07.305934    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:07.305941    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:07.305949    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:09.307826    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 13
	I1213 12:12:09.307843    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:09.307893    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:09.309086    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:09.309183    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:09.309210    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:09.309222    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:09.309227    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:09.309234    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:09.309239    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:09.309245    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:09.309263    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:09.309271    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:09.309277    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:09.309301    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:09.309315    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:09.309338    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:09.309346    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:09.309352    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:09.309360    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:09.309366    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:09.309373    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:09.309381    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:09.309390    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:11.311410    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 14
	I1213 12:12:11.311429    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:11.311477    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:11.312475    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:11.312593    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:11.312601    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:11.312609    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:11.312619    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:11.312625    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:11.312632    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:11.312638    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:11.312646    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:11.312656    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:11.312664    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:11.312679    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:11.312694    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:11.312701    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:11.312709    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:11.312715    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:11.312723    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:11.312729    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:11.312735    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:11.312741    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:11.312748    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:13.314763    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 15
	I1213 12:12:13.314775    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:13.314853    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:13.315868    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:13.315960    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:13.315976    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:13.315985    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:13.315990    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:13.315996    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:13.316002    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:13.316021    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:13.316034    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:13.316042    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:13.316063    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:13.316075    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:13.316084    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:13.316091    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:13.316099    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:13.316109    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:13.316114    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:13.316131    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:13.316143    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:13.316151    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:13.316160    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:15.317511    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 16
	I1213 12:12:15.317526    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:15.317561    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:15.318639    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:15.318706    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:15.318714    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:15.318722    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:15.318728    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:15.318748    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:15.318755    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:15.318763    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:15.318771    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:15.318779    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:15.318785    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:15.318806    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:15.318818    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:15.318827    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:15.318835    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:15.318842    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:15.318847    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:15.318854    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:15.318862    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:15.318868    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:15.318873    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:17.319118    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 17
	I1213 12:12:17.319130    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:17.319186    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:17.320227    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:17.320360    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:17.320370    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:17.320377    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:17.320387    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:17.320404    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:17.320415    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:17.320424    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:17.320442    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:17.320457    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:17.320469    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:17.320483    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:17.320495    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:17.320503    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:17.320511    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:17.320518    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:17.320525    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:17.320533    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:17.320538    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:17.320550    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:17.320562    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:19.320649    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 18
	I1213 12:12:19.320661    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:19.320730    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:19.321778    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:19.321894    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:19.321903    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:19.321920    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:19.321930    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:19.321940    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:19.321945    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:19.321953    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:19.321958    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:19.321994    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:19.322007    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:19.322020    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:19.322029    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:19.322035    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:19.322042    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:19.322048    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:19.322054    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:19.322066    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:19.322073    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:19.322079    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:19.322086    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:21.324102    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 19
	I1213 12:12:21.324117    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:21.324185    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:21.325262    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:21.325362    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:21.325398    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:21.325405    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:21.325413    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:21.325419    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:21.325425    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:21.325431    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:21.325438    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:21.325449    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:21.325459    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:21.325466    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:21.325483    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:21.325496    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:21.325504    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:21.325512    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:21.325518    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:21.325525    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:21.325531    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:21.325546    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:21.325555    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:23.326719    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 20
	I1213 12:12:23.326735    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:23.326798    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:23.328207    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:23.328298    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:23.328306    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:23.328314    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:23.328321    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:23.328331    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:23.328337    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:23.328343    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:23.328369    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:23.328388    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:23.328400    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:23.328410    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:23.328418    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:23.328432    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:23.328447    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:23.328456    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:23.328463    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:23.328475    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:23.328485    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:23.328496    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:23.328504    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:25.330487    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 21
	I1213 12:12:25.330501    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:25.330560    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:25.331549    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:25.331652    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:25.331663    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:25.331689    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:25.331702    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:25.331713    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:25.331720    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:25.331728    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:25.331735    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:25.331745    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:25.331755    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:25.331762    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:25.331768    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:25.331779    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:25.331791    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:25.331799    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:25.331809    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:25.331818    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:25.331827    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:25.331833    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:25.331843    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:27.332562    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 22
	I1213 12:12:27.332581    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:27.332647    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:27.333887    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:27.334020    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:27.334029    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:27.334036    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:27.334042    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:27.334051    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:27.334058    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:27.334077    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:27.334143    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:27.334165    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:27.334178    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:27.334186    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:27.334194    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:27.334212    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:27.334225    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:27.334235    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:27.334246    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:27.334264    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:27.334274    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:27.334282    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:27.334290    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:29.336244    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 23
	I1213 12:12:29.336259    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:29.336334    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:29.337370    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:29.337464    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:29.337486    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:29.337518    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:29.337524    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:29.337530    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:29.337540    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:29.337553    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:29.337562    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:29.337568    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:29.337576    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:29.337586    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:29.337594    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:29.337600    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:29.337612    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:29.337618    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:29.337625    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:29.337635    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:29.337640    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:29.337647    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:29.337654    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:31.339741    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 24
	I1213 12:12:31.339755    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:31.339791    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:31.340931    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:31.341050    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:31.341061    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:31.341067    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:31.341082    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:31.341099    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:31.341108    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:31.341114    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:31.341123    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:31.341130    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:31.341138    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:31.341146    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:31.341152    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:31.341171    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:31.341179    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:31.341185    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:31.341196    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:31.341207    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:31.341215    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:31.341222    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:31.341229    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:33.342921    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 25
	I1213 12:12:33.342937    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:33.342999    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:33.344016    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:33.344147    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:33.344177    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:33.344185    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:33.344201    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:33.344208    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:33.344214    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:33.344221    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:33.344227    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:33.344246    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:33.344257    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:33.344272    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:33.344292    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:33.344299    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:33.344308    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:33.344314    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:33.344319    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:33.344329    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:33.344334    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:33.344340    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:33.344347    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:35.344473    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 26
	I1213 12:12:35.344490    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:35.344538    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:35.345562    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:35.345651    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:35.345660    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:35.345668    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:35.345677    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:35.345683    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:35.345689    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:35.345695    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:35.345706    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:35.345717    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:35.345723    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:35.345731    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:35.345739    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:35.345745    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:35.345751    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:35.345758    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:35.345767    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:35.345781    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:35.345790    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:35.345798    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:35.345804    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:37.347849    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 27
	I1213 12:12:37.347862    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:37.347923    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:37.348949    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:37.349046    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:37.349102    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:37.349120    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:37.349131    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:37.349138    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:37.349145    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:37.349155    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:37.349163    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:37.349171    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:37.349176    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:37.349182    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:37.349192    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:37.349201    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:37.349208    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:37.349213    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:37.349234    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:37.349246    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:37.349270    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:37.349282    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:37.349297    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:39.350070    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 28
	I1213 12:12:39.350539    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:39.350550    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:39.351164    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:39.351256    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:39.351265    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:39.351276    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:39.351284    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:39.351291    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:39.351300    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:39.351306    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:39.351317    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:39.351326    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:39.351336    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:39.351418    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:39.351446    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:39.351458    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:39.351465    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:39.351474    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:39.351483    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:39.351490    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:39.351498    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:39.351508    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:39.351522    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:41.352103    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Attempt 29
	I1213 12:12:41.352120    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:12:41.352192    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | hyperkit pid from json: 7746
	I1213 12:12:41.353229    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Searching for 4e:c0:8d:6e:40:df in /var/db/dhcpd_leases ...
	I1213 12:12:41.353307    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:12:41.353329    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:12:41.353349    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:12:41.353360    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:12:41.353367    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:12:41.353376    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:12:41.353383    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:12:41.353390    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:12:41.353397    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:12:41.353403    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:12:41.353433    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:12:41.353449    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:12:41.353465    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:12:41.353480    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:12:41.353488    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:12:41.353497    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:12:41.353504    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:12:41.353510    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:12:41.353522    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:12:41.353532    7656 main.go:141] libmachine: (force-systemd-flag-806000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:12:43.355559    7656 client.go:171] duration metric: took 1m0.935941857s to LocalClient.Create
	I1213 12:12:45.357676    7656 start.go:128] duration metric: took 1m2.972545031s to createHost
	I1213 12:12:45.357706    7656 start.go:83] releasing machines lock for "force-systemd-flag-806000", held for 1m2.97265117s
	W1213 12:12:45.357814    7656 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-806000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4e:c0:8d:6e:40:df
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-806000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4e:c0:8d:6e:40:df
	I1213 12:12:45.399975    7656 out.go:201] 
	W1213 12:12:45.421013    7656 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4e:c0:8d:6e:40:df
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4e:c0:8d:6e:40:df
	W1213 12:12:45.421027    7656 out.go:270] * 
	* 
	W1213 12:12:45.421627    7656 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:12:45.482923    7656 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-806000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
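For context on the failure mode: the repeated "Attempt N" blocks above are the hyperkit driver polling macOS's DHCP lease database, /var/db/dhcpd_leases, for the MAC address it generated for the new VM (4e:c0:8d:6e:40:df). The test fails because that MAC never appears, i.e. the guest never obtained a lease. As a rough illustration of what such a lookup involves (a minimal sketch, not minikube's actual implementation, assuming the bootpd lease format of one "{ ... }" block per lease with name=/ip_address=/hw_address= fields):

// Minimal sketch of the lease lookup behind the log's
// "Searching for <MAC> in /var/db/dhcpd_leases" lines.
// Illustrative only; assumes bootpd's lease-block format.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func findIPForMAC(leasePath, mac string) (string, error) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is "<type>,<mac>", e.g. "1,4e:c0:8d:6e:40:df".
			if i := strings.IndexByte(line, ','); i >= 0 {
				hw = line[i+1:]
			}
		case line == "}":
			// End of one lease block: compare and reset.
			if hw == mac {
				return ip, nil
			}
			ip, hw = "", ""
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "4e:c0:8d:6e:40:df")
	if err != nil {
		fmt.Println(err) // expected in this run: the VM never got a lease
		return
	}
	fmt.Println(ip)
}

Note that bootpd drops leading zeros within each octet (the entries above show 62:85:56:4d:f:39 for 62:85:56:4d:0f:39), so a real matcher would normalize both sides before comparing.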
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-806000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-806000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (227.902972ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-806000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-806000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
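For context on this follow-up failure: the test verifies that the forced configuration took effect by running docker info --format {{.CgroupDriver}} inside the cluster and expecting "systemd"; here the check never reaches the daemon because the control-plane node has no IP (DRV_CP_ENDPOINT). A minimal sketch of that verification step, assuming a directly reachable Docker daemon rather than the "minikube ssh" transport the test actually uses:

// Illustrative cgroup-driver check (not the test's actual code).
// Assumes a local `docker` CLI pointed at the cluster's daemon.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	driver := strings.TrimSpace(string(out))
	if driver != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", driver)
		return
	}
	fmt.Println("cgroup driver is systemd")
}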
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-12-13 12:12:45.829197 -0800 PST m=+4213.694366516
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-806000 -n force-systemd-flag-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-806000 -n force-systemd-flag-806000: exit status 7 (102.377018ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:12:45.929377    7773 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1213 12:12:45.929407    7773 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-806000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-806000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-806000
E1213 12:12:50.919958    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-806000: (5.269537425s)
--- FAIL: TestForceSystemdFlag (252.29s)
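The timings above are internally consistent: attempts 0 through 29 at roughly two-second intervals account for the "took 1m0.935941857s to LocalClient.Create" line before the driver gives up with "IP address never found in dhcp leases file". A schematic of that bounded retry (illustrative only; lookupIP is a hypothetical stand-in for the lease-file scan sketched earlier):

// Illustrative retry loop matching the cadence seen in the log:
// ~30 attempts, ~2s apart, then a hard failure after about a minute.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a placeholder for scanning /var/db/dhcpd_leases.
func lookupIP(mac string) (string, error) {
	return "", errors.New("not found")
}

func waitForIP(mac string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		fmt.Printf("Attempt %d\n", i)
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", fmt.Errorf("IP address never found in dhcp leases file for %s", mac)
}

func main() {
	if _, err := waitForIP("4e:c0:8d:6e:40:df", 30, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}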

                                                
                                    
TestForceSystemdEnv (233.19s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-990000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-990000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m47.518867161s)

                                                
                                                
-- stdout --
	* [force-systemd-env-990000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-990000" primary control-plane node in "force-systemd-env-990000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-990000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 12:05:48.912882    7577 out.go:345] Setting OutFile to fd 1 ...
	I1213 12:05:48.913198    7577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 12:05:48.913203    7577 out.go:358] Setting ErrFile to fd 2...
	I1213 12:05:48.913207    7577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 12:05:48.913392    7577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 12:05:48.915001    7577 out.go:352] Setting JSON to false
	I1213 12:05:48.944168    7577 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3918,"bootTime":1734116430,"procs":557,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 12:05:48.944313    7577 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 12:05:48.969103    7577 out.go:177] * [force-systemd-env-990000] minikube v1.34.0 on Darwin 15.1.1
	I1213 12:05:49.013864    7577 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 12:05:49.013904    7577 notify.go:220] Checking for updates...
	I1213 12:05:49.056752    7577 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 12:05:49.077726    7577 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 12:05:49.098518    7577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:05:49.118730    7577 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:05:49.139708    7577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1213 12:05:49.160966    7577 config.go:182] Loaded profile config "offline-docker-990000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 12:05:49.161046    7577 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 12:05:49.192691    7577 out.go:177] * Using the hyperkit driver based on user configuration
	I1213 12:05:49.234515    7577 start.go:297] selected driver: hyperkit
	I1213 12:05:49.234528    7577 start.go:901] validating driver "hyperkit" against <nil>
	I1213 12:05:49.234554    7577 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:05:49.240218    7577 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:05:49.240351    7577 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20090-800/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1213 12:05:49.251535    7577 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1213 12:05:49.258376    7577 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:05:49.258395    7577 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1213 12:05:49.258429    7577 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 12:05:49.258675    7577 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 12:05:49.258703    7577 cni.go:84] Creating CNI manager for ""
	I1213 12:05:49.258740    7577 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 12:05:49.258746    7577 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 12:05:49.258812    7577 start.go:340] cluster config:
	{Name:force-systemd-env-990000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:05:49.258894    7577 iso.go:125] acquiring lock: {Name:mke3ec926417a11c6d5b1356d2702df4068fa1cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:05:49.279710    7577 out.go:177] * Starting "force-systemd-env-990000" primary control-plane node in "force-systemd-env-990000" cluster
	I1213 12:05:49.321499    7577 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 12:05:49.321523    7577 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1213 12:05:49.321535    7577 cache.go:56] Caching tarball of preloaded images
	I1213 12:05:49.321657    7577 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 12:05:49.321666    7577 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 12:05:49.321735    7577 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/force-systemd-env-990000/config.json ...
	I1213 12:05:49.321752    7577 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/force-systemd-env-990000/config.json: {Name:mkf82d8c4177f84ca6acf82edf478c83c8c2ff29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:05:49.322195    7577 start.go:360] acquireMachinesLock for force-systemd-env-990000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 12:06:27.107135    7577 start.go:364] duration metric: took 37.78536748s to acquireMachinesLock for "force-systemd-env-990000"
	I1213 12:06:27.107199    7577 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 12:06:27.107257    7577 start.go:125] createHost starting for "" (driver="hyperkit")
	I1213 12:06:27.128722    7577 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1213 12:06:27.128909    7577 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:06:27.128952    7577 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:06:27.140756    7577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53820
	I1213 12:06:27.141146    7577 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:06:27.141550    7577 main.go:141] libmachine: Using API Version  1
	I1213 12:06:27.141562    7577 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:06:27.141793    7577 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:06:27.141895    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .GetMachineName
	I1213 12:06:27.142005    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .DriverName
	I1213 12:06:27.142103    7577 start.go:159] libmachine.API.Create for "force-systemd-env-990000" (driver="hyperkit")
	I1213 12:06:27.142129    7577 client.go:168] LocalClient.Create starting
	I1213 12:06:27.142160    7577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem
	I1213 12:06:27.142223    7577 main.go:141] libmachine: Decoding PEM data...
	I1213 12:06:27.142239    7577 main.go:141] libmachine: Parsing certificate...
	I1213 12:06:27.142294    7577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem
	I1213 12:06:27.142342    7577 main.go:141] libmachine: Decoding PEM data...
	I1213 12:06:27.142354    7577 main.go:141] libmachine: Parsing certificate...
	I1213 12:06:27.142375    7577 main.go:141] libmachine: Running pre-create checks...
	I1213 12:06:27.142381    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .PreCreateCheck
	I1213 12:06:27.142452    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:27.142627    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .GetConfigRaw
	I1213 12:06:27.176546    7577 main.go:141] libmachine: Creating machine...
	I1213 12:06:27.176558    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .Create
	I1213 12:06:27.176658    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:27.176834    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | I1213 12:06:27.176645    7596 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:06:27.176868    7577 main.go:141] libmachine: (force-systemd-env-990000) Downloading /Users/jenkins/minikube-integration/20090-800/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20090-800/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 12:06:27.386893    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | I1213 12:06:27.386786    7596 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/id_rsa...
	I1213 12:06:27.473329    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | I1213 12:06:27.473254    7596 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/force-systemd-env-990000.rawdisk...
	I1213 12:06:27.473342    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Writing magic tar header
	I1213 12:06:27.473351    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Writing SSH key tar header
	I1213 12:06:27.473948    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | I1213 12:06:27.473892    7596 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000 ...
	I1213 12:06:27.865896    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:27.865923    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/hyperkit.pid
	I1213 12:06:27.865933    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Using UUID 83cf65e3-eede-422a-8806-3b488492221f
	I1213 12:06:27.889898    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Generated MAC 8e:83:7a:c4:96:d4
	I1213 12:06:27.889919    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-990000
	I1213 12:06:27.889964    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"83cf65e3-eede-422a-8806-3b488492221f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:06:27.889995    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"83cf65e3-eede-422a-8806-3b488492221f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:06:27.890038    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "83cf65e3-eede-422a-8806-3b488492221f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/force-systemd-env-990000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-990000"}
	I1213 12:06:27.890078    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 83cf65e3-eede-422a-8806-3b488492221f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/force-systemd-env-990000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-990000"
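
The Arguments and CmdLine entries above are the complete hyperkit invocation: pid file, 2 vCPUs, 2048M of RAM, a hostbridge and LPC bus, a virtio-net NIC, the raw disk and boot2docker ISO, a virtio RNG, an autopty console, and a kexec boot of the bundled kernel and initrd. A minimal Go sketch that assembles an equivalent argv with os/exec (the state directory and disk name below are placeholders, not the real machine directory; the flags themselves are copied from the log):

    // Illustrative only: rebuild the hyperkit argv shown above and exec it.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	state := "/tmp/minikube-machines/demo" // hypothetical StateDir
    	args := []string{
    		"-A", "-u",
    		"-F", state + "/hyperkit.pid", // pid file, as in the log
    		"-c", "2", "-m", "2048M", // CPUs and memory
    		"-s", "0:0,hostbridge", "-s", "31,lpc", // PCI topology
    		"-s", "1:0,virtio-net", // NIC whose MAC the driver later looks up
    		"-U", "83cf65e3-eede-422a-8806-3b488492221f",
    		"-s", "2:0,virtio-blk," + state + "/demo.rawdisk",
    		"-s", "3,ahci-cd," + state + "/boot2docker.iso",
    		"-s", "4,virtio-rnd",
    		"-l", "com1,autopty=" + state + "/tty,log=" + state + "/console-ring",
    		"-f", "kexec," + state + "/bzimage," + state + "/initrd," +
    			"earlyprintk=serial loglevel=3 console=ttyS0",
    	}
    	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil { // needs root and macOS; sketch only
    		log.Fatal(err)
    	}
    }
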
	I1213 12:06:27.890115    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 12:06:27.893222    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 DEBUG: hyperkit: Pid is 7597
	I1213 12:06:27.894306    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 0
	I1213 12:06:27.894319    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:27.894412    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:27.895586    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:27.895657    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:27.895669    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:27.895684    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:27.895694    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:27.895706    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:27.895715    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:27.895724    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:27.895731    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:27.895754    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:27.895769    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:27.895779    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:27.895786    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:27.895793    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:27.895801    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:27.895808    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:27.895814    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:27.895829    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:27.895843    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:27.895850    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:27.895860    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
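
Each attempt above is one scan of the host's DHCP lease database: the driver reads /var/db/dhcpd_leases looking for the new VM's MAC (8e:83:7a:c4:96:d4), and every entry it prints is an existing minikube lease, none of which matches yet. Note that bootpd stores MAC octets without leading zeros (the 0f in HWAddress 62:85:56:4d:0f:39 appears as f in the stored ID), so a scanner has to normalize before comparing. A minimal sketch of such a scan in Go, assuming the usual key=value block layout of /var/db/dhcpd_leases with ip_address preceding hw_address, matching the field order printed above (the file path and MAC are taken from this log; everything else is illustrative):

    // Sketch: find the IP a macOS bootpd lease assigns to a given MAC.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // trimZeros drops leading zeros per octet ("0f" -> "f") so both
    // spellings seen in the log compare equal.
    func trimZeros(mac string) string {
    	parts := strings.Split(strings.ToLower(mac), ":")
    	for i, p := range parts {
    		if t := strings.TrimLeft(p, "0"); t != "" {
    			parts[i] = t
    		} else {
    			parts[i] = "0"
    		}
    	}
    	return strings.Join(parts, ":")
    }

    func ipForMAC(leaseFile, mac string) (string, error) {
    	f, err := os.Open(leaseFile)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	want, ip := trimZeros(mac), ""
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			// e.g. hw_address=1,8e:83:7a:c4:96:d4
    			hw := line[strings.Index(line, ",")+1:]
    			if trimZeros(hw) == want && ip != "" {
    				return ip, nil
    			}
    		}
    	}
    	return "", fmt.Errorf("%s not found in %s", mac, leaseFile)
    }

    func main() {
    	ip, err := ipForMAC("/var/db/dhcpd_leases", "8e:83:7a:c4:96:d4")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err) // expected until the VM requests a lease
    		return
    	}
    	fmt.Println(ip)
    }
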
	I1213 12:06:27.904145    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 12:06:27.912708    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 12:06:27.913611    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:06:27.913632    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:06:27.913644    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:06:27.913669    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:06:28.297516    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 12:06:28.297529    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 12:06:28.412388    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:06:28.412410    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:06:28.412421    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:06:28.412435    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:06:28.413276    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 12:06:28.413287    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 12:06:29.896306    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 1
	I1213 12:06:29.896322    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:29.896350    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:29.897412    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:29.897529    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:29.897544    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:29.897557    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:29.897566    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:29.897579    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:29.897594    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:29.897610    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:29.897619    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:29.897626    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:29.897633    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:29.897642    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:29.897650    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:29.897663    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:29.897686    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:29.897701    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:29.897712    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:29.897720    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:29.897728    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:29.897738    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:29.897747    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
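
The timestamps show the scan repeating on a fixed two-second cadence (12:06:27, :29, :31, ...) until the MAC appears or the driver gives up; the retry limit and overall timeout are not visible in this excerpt. A minimal polling sketch in Go with the same interval (the one-minute deadline and the stand-in check function are arbitrary placeholders):

    // Sketch: poll on the two-second cadence seen above until a deadline.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func waitFor(check func() (string, bool), interval, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for attempt := 0; ; attempt++ {
    		fmt.Printf("Attempt %d\n", attempt)
    		if v, ok := check(); ok {
    			return v, nil
    		}
    		if time.Now().After(deadline) {
    			return "", errors.New("timed out waiting for a DHCP lease")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	start := time.Now()
    	// Stand-in for the lease-file scan; pretends a lease shows up after ~6s.
    	ip, err := waitFor(func() (string, bool) {
    		return "192.169.0.21", time.Since(start) > 6*time.Second
    	}, 2*time.Second, time.Minute)
    	fmt.Println(ip, err)
    }
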
	I1213 12:06:31.898690    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 2
	I1213 12:06:31.898728    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:31.898816    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:31.899874    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:31.899947    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:31.899964    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:31.899980    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:31.900001    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:31.900021    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:31.900035    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:31.900043    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:31.900049    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:31.900055    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:31.900063    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:31.900069    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:31.900085    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:31.900099    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:31.900112    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:31.900121    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:31.900129    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:31.900134    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:31.900140    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:31.900147    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:31.900153    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:06:33.774774    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:33 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1213 12:06:33.774874    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:33 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1213 12:06:33.774885    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:33 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1213 12:06:33.794497    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:06:33 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1213 12:06:33.902281    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 3
	I1213 12:06:33.902312    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:33.902526    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:33.904339    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:33.904539    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:33.904553    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:33.904563    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:33.904577    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:33.904588    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:33.904598    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:33.904607    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:33.904624    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:33.904634    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:33.904643    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:33.904674    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:33.904690    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:33.904704    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:33.904712    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:33.904722    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:33.904729    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:33.904740    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:33.904751    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:33.904761    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:33.904768    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:06:35.904676    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 4
	I1213 12:06:35.904693    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:35.904779    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:35.905830    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:35.905900    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:35.905912    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:35.905925    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:35.905935    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:35.905942    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:35.905978    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:35.905989    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:35.905997    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:35.906004    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:35.906019    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:35.906038    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:35.906046    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:35.906060    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:35.906072    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:35.906079    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:35.906088    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:35.906108    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:35.906117    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:35.906123    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:35.906131    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:06:37.907249    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 5
	I1213 12:06:37.907265    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:37.907288    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:37.908289    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:37.908384    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:37.908396    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:37.908404    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:37.908428    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:37.908437    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:37.908448    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:37.908458    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:37.908466    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:37.908475    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:37.908483    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:37.908489    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:37.908495    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:37.908503    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:37.908515    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:37.908524    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:37.908531    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:37.908549    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:37.908562    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:37.908575    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:37.908585    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:06:39.908793    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 6
	I1213 12:06:39.908812    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:39.908886    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:39.909948    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:39.910046    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:39.910057    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:39.910070    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:39.910083    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:39.910093    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:39.910100    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:39.910117    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:39.910136    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:39.910160    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:39.910179    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:39.910205    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:39.910228    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:39.910236    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:39.910242    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:39.910250    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:39.910257    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:39.910263    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:39.910276    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:39.910288    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:39.910298    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:06:41.910474    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 7
	I1213 12:06:41.910489    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:41.910565    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:41.911764    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:41.911840    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:41.911853    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:41.911862    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:41.911874    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:41.911883    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:41.911888    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:41.911912    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:41.911924    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:41.911983    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:41.912000    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:41.912007    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:41.912014    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:41.912021    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:41.912028    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:41.912042    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:41.912056    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:41.912064    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:41.912072    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:41.912079    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:41.912094    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:06:43.913970    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 8
	I1213 12:06:43.914050    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:43.914064    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:43.915032    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:43.915120    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:43.915133    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:43.915141    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:43.915148    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:43.915156    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:43.915162    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:43.915170    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:43.915177    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:43.915183    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:43.915190    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:43.915197    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:43.915205    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:43.915211    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:43.915218    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:43.915234    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:43.915248    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:43.915256    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:43.915264    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:43.915271    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:43.915279    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:06:45.915893    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 9
	I1213 12:06:45.915908    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:45.915977    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:45.916974    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:45.917076    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:45.917086    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:45.917104    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:45.917114    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:45.917121    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:45.917127    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:45.917134    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:45.917141    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:45.917147    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:45.917156    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:45.917163    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:45.917171    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:45.917187    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:45.917199    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:45.917207    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:45.917212    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:45.917219    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:45.917226    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:45.917233    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:45.917240    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:06:47.919275    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 10
	I1213 12:06:47.919289    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:47.919325    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:47.920328    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:47.920411    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:47.920422    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:47.920433    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:47.920443    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:47.920459    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:47.920465    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:47.920471    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:47.920477    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:47.920491    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:47.920501    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:47.920511    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:47.920519    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:47.920534    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:47.920542    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:47.920548    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:47.920557    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:47.920568    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:47.920576    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:47.920583    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:47.920596    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:06:49.921219    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 11
	I1213 12:06:49.921232    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:06:49.921305    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:06:49.922590    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:06:49.922718    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:06:49.922728    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:06:49.922737    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:06:49.922742    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:06:49.922762    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:06:49.922776    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:06:49.922785    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:06:49.922792    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:06:49.922799    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:06:49.922804    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:06:49.922812    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:06:49.922821    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:06:49.922838    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:06:49.922846    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:06:49.922853    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:06:49.922860    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:06:49.922866    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:06:49.922872    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:06:49.922879    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:06:49.922888    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
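
The fixed ~2-second spacing between "Attempt N" lines suggests a simple poll-and-sleep loop around that lookup. A hypothetical version, building on ipForMAC from the sketch above (it additionally needs the "log" and "time" imports; the 60-attempt budget is an assumption, not a value taken from the driver):

// waitForIP polls the lease file until the MAC appears or the budget runs out.
func waitForIP(mac string) (string, error) {
	for attempt := 1; attempt <= 60; attempt++ {
		if ip, ok := ipForMAC("/var/db/dhcpd_leases", mac); ok {
			return ip, nil
		}
		log.Printf("Attempt %d: %s not yet in /var/db/dhcpd_leases", attempt, mac)
		time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
	}
	return "", fmt.Errorf("no DHCP lease found for %s", mac)
}
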
	[Attempts 12 through 23 (12:06:51 through 12:07:14) elided: each repeats the identical scan of /var/db/dhcpd_leases at ~2-second intervals, finds the same 19 lease entries (192.169.0.2 - 192.169.0.20), and never matches 8e:83:7a:c4:96:d4.]
	I1213 12:07:16.089439    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 24
	I1213 12:07:16.089455    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:16.089524    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:16.090563    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:07:16.090695    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:16.090705    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:16.090713    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:16.090718    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:16.090726    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:16.090737    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:16.090744    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:16.090750    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:16.090756    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:16.090762    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:16.090769    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:16.090777    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:16.090784    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:16.090790    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:16.090799    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:16.090806    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:16.090814    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:16.090820    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:16.090828    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:16.090836    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:18.091656    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 25
	I1213 12:07:18.091671    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:18.091703    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:18.092752    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:07:18.092836    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:18.092846    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:18.092859    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:18.092865    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:18.092876    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:18.092890    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:18.092900    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:18.092909    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:18.092926    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:18.092942    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:18.092950    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:18.092956    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:18.092979    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:18.092991    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:18.092998    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:18.093004    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:18.093011    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:18.093019    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:18.093027    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:18.093033    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:20.095063    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 26
	I1213 12:07:20.095077    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:20.095137    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:20.096318    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:07:20.096418    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:20.096448    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:20.096457    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:20.096462    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:20.096468    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:20.096474    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:20.096494    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:20.096506    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:20.096526    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:20.096535    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:20.096542    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:20.096547    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:20.096554    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:20.096561    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:20.096583    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:20.096595    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:20.096602    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:20.096610    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:20.096617    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:20.096623    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:22.098684    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 27
	I1213 12:07:22.098700    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:22.098745    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:22.099742    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:07:22.099841    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:22.099851    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:22.099862    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:22.099872    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:22.099892    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:22.099902    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:22.099910    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:22.099915    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:22.099928    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:22.099936    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:22.099942    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:22.099950    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:22.099956    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:22.099963    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:22.099981    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:22.099994    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:22.100001    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:22.100010    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:22.100017    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:22.100024    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:24.102018    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 28
	I1213 12:07:24.102032    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:24.102075    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:24.103081    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:07:24.103174    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:24.103186    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:24.103204    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:24.103215    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:24.103222    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:24.103231    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:24.103239    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:24.103244    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:24.103250    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:24.103256    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:24.103263    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:24.103270    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:24.103276    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:24.103289    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:24.103296    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:24.103304    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:24.103313    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:24.103321    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:24.103336    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:24.103344    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:26.105405    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 29
	I1213 12:07:26.105425    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:26.105456    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:26.106559    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 8e:83:7a:c4:96:d4 in /var/db/dhcpd_leases ...
	I1213 12:07:26.106683    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:07:26.106691    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:07:26.106698    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:07:26.106707    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:07:26.106714    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:07:26.106719    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:07:26.106748    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:07:26.106764    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:07:26.106773    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:07:26.106780    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:07:26.106789    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:07:26.106804    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:07:26.106812    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:07:26.106820    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:07:26.106837    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:07:26.106849    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:07:26.106857    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:07:26.106865    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:07:26.106872    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:07:26.106880    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:07:28.108507    7577 client.go:171] duration metric: took 1m0.832751288s to LocalClient.Create
	I1213 12:07:30.109846    7577 start.go:128] duration metric: took 1m2.868926588s to createHost
	I1213 12:07:30.109860    7577 start.go:83] releasing machines lock for "force-systemd-env-990000", held for 1m2.869070549s
	W1213 12:07:30.109924    7577 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:83:7a:c4:96:d4
	I1213 12:07:30.110281    7577 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:07:30.110325    7577 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:07:30.122313    7577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53822
	I1213 12:07:30.122720    7577 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:07:30.123234    7577 main.go:141] libmachine: Using API Version  1
	I1213 12:07:30.123269    7577 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:07:30.123559    7577 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:07:30.124004    7577 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:07:30.124048    7577 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:07:30.135765    7577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53824
	I1213 12:07:30.136106    7577 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:07:30.136449    7577 main.go:141] libmachine: Using API Version  1
	I1213 12:07:30.136466    7577 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:07:30.136765    7577 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:07:30.136881    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .GetState
	I1213 12:07:30.136989    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:30.137052    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:30.138271    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .DriverName
	I1213 12:07:30.159319    7577 out.go:177] * Deleting "force-systemd-env-990000" in hyperkit ...
	I1213 12:07:30.201169    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .Remove
	I1213 12:07:30.201307    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:30.201317    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:30.201377    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:30.202572    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:30.202634    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | waiting for graceful shutdown
	I1213 12:07:31.203017    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:31.203117    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:31.204374    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | waiting for graceful shutdown
	I1213 12:07:32.204590    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:32.204650    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:32.206209    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | waiting for graceful shutdown
	I1213 12:07:33.206380    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:33.206466    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:33.207194    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | waiting for graceful shutdown
	I1213 12:07:34.209347    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:34.209393    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:34.210576    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | waiting for graceful shutdown
	I1213 12:07:35.211436    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:07:35.211538    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7597
	I1213 12:07:35.212254    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | sending sigkill
	I1213 12:07:35.212263    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W1213 12:07:35.225572    7577 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:83:7a:c4:96:d4
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:83:7a:c4:96:d4
	I1213 12:07:35.225596    7577 start.go:729] Will try again in 5 seconds ...
	I1213 12:07:35.234658    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:07:35 WARN : hyperkit: failed to read stderr: EOF
	I1213 12:07:35.234700    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:07:35 WARN : hyperkit: failed to read stdout: EOF
	I1213 12:07:40.227697    7577 start.go:360] acquireMachinesLock for force-systemd-env-990000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 12:08:33.222981    7577 start.go:364] duration metric: took 52.994683019s to acquireMachinesLock for "force-systemd-env-990000"
	I1213 12:08:33.223019    7577 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 12:08:33.223064    7577 start.go:125] createHost starting for "" (driver="hyperkit")
	I1213 12:08:33.246200    7577 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1213 12:08:33.246295    7577 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 12:08:33.246338    7577 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 12:08:33.258437    7577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53828
	I1213 12:08:33.258815    7577 main.go:141] libmachine: () Calling .GetVersion
	I1213 12:08:33.259323    7577 main.go:141] libmachine: Using API Version  1
	I1213 12:08:33.259360    7577 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 12:08:33.259632    7577 main.go:141] libmachine: () Calling .GetMachineName
	I1213 12:08:33.259798    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .GetMachineName
	I1213 12:08:33.259922    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .DriverName
	I1213 12:08:33.260110    7577 start.go:159] libmachine.API.Create for "force-systemd-env-990000" (driver="hyperkit")
	I1213 12:08:33.260140    7577 client.go:168] LocalClient.Create starting
	I1213 12:08:33.260202    7577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem
	I1213 12:08:33.260292    7577 main.go:141] libmachine: Decoding PEM data...
	I1213 12:08:33.260322    7577 main.go:141] libmachine: Parsing certificate...
	I1213 12:08:33.260397    7577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem
	I1213 12:08:33.260493    7577 main.go:141] libmachine: Decoding PEM data...
	I1213 12:08:33.260506    7577 main.go:141] libmachine: Parsing certificate...
	I1213 12:08:33.260519    7577 main.go:141] libmachine: Running pre-create checks...
	I1213 12:08:33.260525    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .PreCreateCheck
	I1213 12:08:33.260608    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:33.260634    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .GetConfigRaw
	I1213 12:08:33.268395    7577 main.go:141] libmachine: Creating machine...
	I1213 12:08:33.268404    7577 main.go:141] libmachine: (force-systemd-env-990000) Calling .Create
	I1213 12:08:33.268505    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:33.268723    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | I1213 12:08:33.268501    7642 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 12:08:33.268769    7577 main.go:141] libmachine: (force-systemd-env-990000) Downloading /Users/jenkins/minikube-integration/20090-800/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20090-800/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 12:08:33.625258    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | I1213 12:08:33.625192    7642 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/id_rsa...
	I1213 12:08:33.714597    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | I1213 12:08:33.714540    7642 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/force-systemd-env-990000.rawdisk...
	I1213 12:08:33.714611    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Writing magic tar header
	I1213 12:08:33.714621    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Writing SSH key tar header
	I1213 12:08:33.714970    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | I1213 12:08:33.714932    7642 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000 ...
	I1213 12:08:34.172046    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:34.172068    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/hyperkit.pid
	I1213 12:08:34.172084    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Using UUID 16cd7c6c-75c6-44a2-ac97-f7dc23731842
	I1213 12:08:34.195970    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Generated MAC 32:ce:34:ac:4a:28
	I1213 12:08:34.195985    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-990000
	I1213 12:08:34.196017    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"16cd7c6c-75c6-44a2-ac97-f7dc23731842", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:08:34.196046    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"16cd7c6c-75c6-44a2-ac97-f7dc23731842", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 12:08:34.196119    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "16cd7c6c-75c6-44a2-ac97-f7dc23731842", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/force-systemd-env-990000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-990000"}
	I1213 12:08:34.196159    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 16cd7c6c-75c6-44a2-ac97-f7dc23731842 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/force-systemd-env-990000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-990000"
	I1213 12:08:34.196171    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 12:08:34.199512    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 DEBUG: hyperkit: Pid is 7653
	I1213 12:08:34.199959    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 0
	I1213 12:08:34.199975    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:34.200123    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:34.201400    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:34.201532    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:34.201548    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:34.201570    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:34.201583    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:34.201592    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:34.201604    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:34.201616    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:34.201643    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:34.201678    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:34.201697    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:34.201711    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:34.201724    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:34.201732    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:34.201742    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:34.201756    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:34.201765    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:34.201792    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:34.201804    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:34.201816    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:34.201828    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:08:34.210449    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 12:08:34.218974    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/force-systemd-env-990000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 12:08:34.220020    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:08:34.220038    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:08:34.220049    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:08:34.220058    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:08:34.603517    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 12:08:34.603533    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 12:08:34.718785    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 12:08:34.718810    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 12:08:34.718833    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 12:08:34.718845    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 12:08:34.719645    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 12:08:34.719655    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 12:08:36.203814    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 1
	I1213 12:08:36.203830    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:36.203937    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:36.204970    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:36.205069    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:08:36.205081    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:08:36.205094    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:08:36.205103    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:08:36.205110    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:08:36.205136    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:08:36.205148    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:08:36.205160    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:08:36.205170    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:08:36.205178    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:08:36.205184    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:08:36.205201    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:08:36.205212    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:08:36.205223    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:08:36.205241    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:08:36.205251    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:08:36.205258    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:08:36.205266    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:08:36.205283    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:08:36.205294    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
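
	The "Attempt" entries here and below show the hyperkit driver polling macOS's DHCP lease database, /var/db/dhcpd_leases, every ~2 seconds until the new VM's MAC address (32:ce:34:ac:4a:28) shows up with an IP. Note that the lease file drops leading zeros in MAC octets (e.g. HWAddress be:17:00:18:99:2e is recorded as ID be:17:0:18:99:2e above), so a matcher has to normalize both sides before comparing. A minimal Go sketch of such a poll-and-match loop, assuming the one-entry-per-line shape echoed in the log; the regex, helper names, and retry budget are illustrative, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
		"time"
	)

	// normalizeMAC strips leading zeros from each octet so that
	// "be:17:00:18:99:2e" and "be:17:0:18:99:2e" compare equal.
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			parts[i] = strings.TrimLeft(p, "0")
			if parts[i] == "" {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}

	// leaseRe matches entries shaped like the log's
	// {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ...}
	var leaseRe = regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+)`)

	// ipForMAC scans the lease file once and returns the IP bound to mac, if any.
	func ipForMAC(leaseFile, mac string) (string, bool) {
		data, err := os.ReadFile(leaseFile)
		if err != nil {
			return "", false
		}
		want := normalizeMAC(mac)
		for _, m := range leaseRe.FindAllStringSubmatch(string(data), -1) {
			if normalizeMAC(m[2]) == want {
				return m[1], true
			}
		}
		return "", false
	}

	func main() {
		const mac = "32:ce:34:ac:4a:28" // MAC assigned to the new VM
		for attempt := 1; attempt <= 60; attempt++ {
			if ip, ok := ipForMAC("/var/db/dhcpd_leases", mac); ok {
				fmt.Printf("got lease: %s -> %s\n", mac, ip)
				return
			}
			time.Sleep(2 * time.Second) // matches the ~2s cadence of the attempts in this log
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for DHCP lease")
	}
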
	I1213 12:08:38.206157    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 2
	I1213 12:08:38.206175    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:38.206206    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:38.207309    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:38.207401    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:08:40.117779    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1213 12:08:40.117880    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1213 12:08:40.117888    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1213 12:08:40.137478    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | 2024/12/13 12:08:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1213 12:08:40.209756    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 3
	I1213 12:08:40.209784    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:40.210017    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:40.211834    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:40.212049    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:08:42.212371    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 4
	I1213 12:08:42.212385    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:42.212448    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:42.213509    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:42.213589    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:08:44.213940    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 5
	I1213 12:08:44.213953    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:44.213985    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:44.215018    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:44.215095    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:08:46.216930    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 6
	I1213 12:08:46.216945    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:46.216997    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:46.218072    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:46.218169    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:08:48.219115    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 7
	I1213 12:08:48.219129    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:48.219197    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:48.220297    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:48.220398    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:08:50.222521    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 8
	I1213 12:08:50.222535    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:50.222639    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:50.223667    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:50.223757    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:08:52.226013    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 9
	I1213 12:08:52.226028    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:52.226072    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:52.227075    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:52.227166    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:08:54.227806    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 10
	I1213 12:08:54.227819    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:54.227905    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:54.229168    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:54.229280    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:08:56.231130    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 11
	I1213 12:08:56.231145    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:56.231201    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:56.232202    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:56.232291    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:08:58.234543    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 12
	I1213 12:08:58.234559    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:08:58.234617    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:08:58.235620    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:08:58.235714    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:09:00.236623    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 13
	I1213 12:09:00.236635    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:00.236731    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:00.237815    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:00.237919    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	... (same 19 dhcp lease entries as in Attempt 1; still no lease for 32:ce:34:ac:4a:28) ...
	I1213 12:09:02.240191    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 14
	I1213 12:09:02.240204    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:02.240276    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:02.241289    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:02.241395    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:02.241406    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:02.241413    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:02.241419    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:02.241426    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:02.241431    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:02.241440    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:02.241449    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:02.241458    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:02.241467    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:02.241482    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:02.241511    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:02.241548    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:02.241563    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:02.241575    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:02.241583    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:02.241604    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:02.241646    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:02.241658    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:02.241676    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:04.242639    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 15
	I1213 12:09:04.242669    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:04.242707    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:04.243694    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:04.243797    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:04.243809    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:04.243817    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:04.243822    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:04.243832    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:04.243837    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:04.243843    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:04.243850    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:04.243855    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:04.243873    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:04.243889    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:04.243896    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:04.243902    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:04.243907    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:04.243923    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:04.243935    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:04.243942    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:04.243950    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:04.243959    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:04.243967    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:06.245919    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 16
	I1213 12:09:06.245935    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:06.245980    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:06.247042    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:06.247216    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:06.247227    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:06.247236    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:06.247254    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:06.247290    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:06.247304    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:06.247314    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:06.247337    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:06.247347    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:06.247360    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:06.247374    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:06.247387    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:06.247402    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:06.247410    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:06.247417    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:06.247423    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:06.247429    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:06.247436    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:06.247443    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:06.247449    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:08.249126    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 17
	I1213 12:09:08.249137    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:08.249197    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:08.250306    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:08.250389    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:08.250399    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:08.250407    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:08.250415    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:08.250421    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:08.250427    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:08.250434    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:08.250440    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:08.250448    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:08.250454    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:08.250460    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:08.250485    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:08.250495    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:08.250506    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:08.250517    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:08.250523    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:08.250530    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:08.250536    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:08.250541    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:08.250548    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:10.251313    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 18
	I1213 12:09:10.251329    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:10.251409    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:10.252392    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:10.252478    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:10.252489    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:10.252498    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:10.252507    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:10.252514    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:10.252522    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:10.252528    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:10.252533    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:10.252548    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:10.252561    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:10.252571    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:10.252582    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:10.252592    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:10.252602    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:10.252610    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:10.252617    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:10.252628    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:10.252635    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:10.252644    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:10.252652    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:12.254704    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 19
	I1213 12:09:12.254720    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:12.254833    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:12.255842    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:12.255940    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:12.255948    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:12.255958    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:12.255963    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:12.255969    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:12.255974    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:12.255980    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:12.255987    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:12.255993    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:12.256000    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:12.256008    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:12.256024    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:12.256038    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:12.256051    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:12.256058    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:12.256066    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:12.256073    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:12.256079    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:12.256093    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:12.256105    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:14.257219    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 20
	I1213 12:09:14.257234    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:14.257279    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:14.258268    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:14.258418    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:14.258432    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:14.258440    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:14.258447    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:14.258456    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:14.258464    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:14.258471    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:14.258484    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:14.258492    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:14.258499    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:14.258505    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:14.258513    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:14.258521    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:14.258529    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:14.258544    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:14.258557    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:14.258565    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:14.258571    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:14.258577    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:14.258585    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:16.260600    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 21
	I1213 12:09:16.260616    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:16.260670    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:16.261668    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:16.261736    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:16.261748    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:16.261757    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:16.261762    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:16.261768    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:16.261774    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:16.261780    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:16.261791    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:16.261799    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:16.261805    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:16.261812    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:16.261817    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:16.261836    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:16.261848    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:16.261874    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:16.261885    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:16.261892    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:16.261897    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:16.261904    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:16.261911    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:18.263949    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 22
	I1213 12:09:18.263966    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:18.264036    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:18.265026    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:18.265123    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:18.265135    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:18.265158    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:18.265168    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:18.265178    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:18.265183    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:18.265201    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:18.265215    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:18.265224    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:18.265244    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:18.265252    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:18.265260    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:18.265265    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:18.265274    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:18.265282    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:18.265288    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:18.265296    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:18.265303    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:18.265312    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:18.265320    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:20.265839    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 23
	I1213 12:09:20.265858    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:20.265920    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:20.266928    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:20.266984    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:20.266997    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:20.267010    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:20.267017    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:20.267024    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:20.267031    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:20.267037    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:20.267044    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:20.267050    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:20.267057    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:20.267072    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:20.267086    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:20.267095    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:20.267102    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:20.267108    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:20.267114    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:20.267131    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:20.267145    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:20.267153    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:20.267166    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:22.268655    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 24
	I1213 12:09:22.268671    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:22.268702    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:22.269814    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:22.269915    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:22.269927    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:22.269959    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:22.269985    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:22.269996    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:22.270013    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:22.270025    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:22.270033    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:22.270041    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:22.270048    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:22.270056    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:22.270068    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:22.270078    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:22.270093    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:22.270101    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:22.270108    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:22.270116    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:22.270126    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:22.270133    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:22.270141    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:24.272128    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 25
	I1213 12:09:24.272142    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:24.272184    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:24.273189    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:24.273264    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:24.273274    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:24.273283    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:24.273288    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:24.273296    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:24.273304    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:24.273311    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:24.273317    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:24.273331    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:24.273345    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:24.273354    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:24.273362    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:24.273376    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:24.273384    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:24.273396    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:24.273402    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:24.273408    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:24.273415    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:24.273423    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:24.273434    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
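Each "Attempt N" block above is the hyperkit driver re-reading /var/db/dhcpd_leases and looking for a lease whose hardware address matches the new VM's MAC (32:ce:34:ac:4a:28). The file keeps returning the same 19 stale minikube leases (192.169.0.2 through 192.169.0.20), the target MAC never appears, and the driver sleeps roughly two seconds before retrying, which is why the blocks repeat verbatim apart from the timestamps. The following is a minimal Go sketch of this kind of lease-file poll; the parser, the function names (parseLeases, waitForIP), and the timeout are illustrative assumptions, not the actual docker-machine-driver-hyperkit code:

	// Sketch only: re-read /var/db/dhcpd_leases every two seconds and look
	// for a lease whose hw_address matches the VM's MAC, as the log shows.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)

	// lease mirrors the fields printed in the "dhcp entry:" log lines.
	type lease struct {
		Name, IP, HWAddress string
	}

	// parseLeases does a loose scan of the macOS dhcpd_leases file, which
	// stores entries as brace-delimited blocks of key=value lines.
	func parseLeases(path string) ([]lease, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		var leases []lease
		var cur lease
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "name="):
				cur.Name = strings.TrimPrefix(line, "name=")
			case strings.HasPrefix(line, "ip_address="):
				cur.IP = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address looks like "1,52:a5:b7:f3:da:7b"; keep the MAC.
				v := strings.TrimPrefix(line, "hw_address=")
				if i := strings.Index(v, ","); i >= 0 {
					v = v[i+1:]
				}
				cur.HWAddress = v
			case line == "}":
				leases = append(leases, cur)
				cur = lease{}
			}
		}
		return leases, sc.Err()
	}

	// waitForIP polls the lease file until the MAC appears or the deadline
	// passes; the 2s cadence matches the timestamps in the log above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			leases, err := parseLeases("/var/db/dhcpd_leases")
			if err != nil {
				return "", err
			}
			for _, l := range leases {
				if strings.EqualFold(l.HWAddress, mac) {
					return l.IP, nil
				}
			}
			fmt.Printf("attempt %d: %d leases, %s not found; retrying\n",
				attempt, len(leases), mac)
			time.Sleep(2 * time.Second)
		}
		return "", fmt.Errorf("no DHCP lease for %s after %s", mac, timeout)
	}

	func main() {
		ip, err := waitForIP("32:ce:34:ac:4a:28", 60*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("VM IP:", ip)
	}

One caveat visible in the log itself: dhcpd_leases drops leading zeros inside MAC octets (the ID field shows "f" where the HWAddress shows "0f"), so a real matcher would normalize each octet before comparing rather than relying on a plain string match as this sketch does.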
	I1213 12:09:26.274729    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 26
	I1213 12:09:26.274747    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:26.274806    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:26.275902    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:26.275985    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:26.275996    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:26.276005    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:26.276010    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:26.276026    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:26.276036    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:26.276042    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:26.276049    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:26.276072    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:26.276088    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:26.276103    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:26.276118    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:26.276132    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:26.276145    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:26.276152    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:26.276159    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:26.276166    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:26.276172    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:26.276181    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:26.276189    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:28.276921    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 27
	I1213 12:09:28.276933    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:28.276942    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:28.277973    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:28.278036    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:28.278076    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:28.278087    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:28.278102    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:28.278114    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:28.278123    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:28.278132    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:28.278140    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:28.278147    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:28.278154    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:28.278160    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:28.278169    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:28.278175    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:28.278183    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:28.278200    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:28.278207    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:28.278214    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:28.278219    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:28.278227    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:28.278236    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:30.280246    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 28
	I1213 12:09:30.280261    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:30.280306    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:30.281309    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:30.281406    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:30.281423    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:30.281437    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:30.281446    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:30.281452    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:30.281458    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:30.281464    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:30.281480    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:30.281487    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:30.281494    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:30.281501    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:30.281512    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:30.281519    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:30.281527    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:30.281535    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:30.281542    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:30.281549    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:30.281555    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:30.281562    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:30.281570    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:32.283656    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Attempt 29
	I1213 12:09:32.283668    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 12:09:32.283723    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | hyperkit pid from json: 7653
	I1213 12:09:32.284795    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Searching for 32:ce:34:ac:4a:28 in /var/db/dhcpd_leases ...
	I1213 12:09:32.284842    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1213 12:09:32.284855    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:8a:62:a2:dc:49:89 ID:1,8a:62:a2:dc:49:89 Lease:0x675ca12d}
	I1213 12:09:32.284863    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:26:f7:dc:6d:c4:17 ID:1,26:f7:dc:6d:c4:17 Lease:0x675ca057}
	I1213 12:09:32.284870    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:52:ac:3b:bb:5d:5d ID:1,52:ac:3b:bb:5d:5d Lease:0x675c91d4}
	I1213 12:09:32.284876    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:56:28:c0:a1:59:cc ID:1,56:28:c0:a1:59:cc Lease:0x675c912e}
	I1213 12:09:32.284881    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:62:85:56:4d:0f:39 ID:1,62:85:56:4d:f:39 Lease:0x675c9f86}
	I1213 12:09:32.284888    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:be:17:00:18:99:2e ID:1,be:17:0:18:99:2e Lease:0x675c9f5a}
	I1213 12:09:32.284898    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:22:d0:a8:35:b8:ff ID:1,22:d0:a8:35:b8:ff Lease:0x675c9d03}
	I1213 12:09:32.284906    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:d6:82:ed:82:72:d9 ID:1,d6:82:ed:82:72:d9 Lease:0x675c9cda}
	I1213 12:09:32.284913    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:e8:d7:5b:c6:97 ID:1,ce:e8:d7:5b:c6:97 Lease:0x675c9c83}
	I1213 12:09:32.284920    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a6:fc:e1:45:c0:18 ID:1,a6:fc:e1:45:c0:18 Lease:0x675c8e67}
	I1213 12:09:32.284928    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ae:fd:e9:0f:81:f3 ID:1,ae:fd:e9:f:81:f3 Lease:0x675c9bf1}
	I1213 12:09:32.284936    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c9bc0}
	I1213 12:09:32.284947    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c8d0f}
	I1213 12:09:32.284956    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9b5f}
	I1213 12:09:32.284962    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9b4c}
	I1213 12:09:32.284969    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:52:a5:b7:f3:da:7b ID:1,52:a5:b7:f3:da:7b Lease:0x675c95de}
	I1213 12:09:32.284976    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ca:b1:2f:27:0e:f7 ID:1,ca:b1:2f:27:e:f7 Lease:0x675c95cd}
	I1213 12:09:32.284986    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:66:31:08:db:fa:19 ID:1,66:31:8:db:fa:19 Lease:0x675c871a}
	I1213 12:09:32.284994    7577 main.go:141] libmachine: (force-systemd-env-990000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:52:7c:fd:bc:7b:9c ID:1,52:7c:fd:bc:7b:9c Lease:0x675c9302}
	I1213 12:09:34.287134    7577 client.go:171] duration metric: took 1m1.026312089s to LocalClient.Create
	I1213 12:09:36.289297    7577 start.go:128] duration metric: took 1m3.065529078s to createHost
	I1213 12:09:36.289345    7577 start.go:83] releasing machines lock for "force-systemd-env-990000", held for 1m3.065652585s
	W1213 12:09:36.289456    7577 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-990000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:ce:34:ac:4a:28
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-990000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:ce:34:ac:4a:28
	I1213 12:09:36.373468    7577 out.go:201] 
	W1213 12:09:36.395591    7577 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:ce:34:ac:4a:28
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:ce:34:ac:4a:28
	W1213 12:09:36.395607    7577 out.go:270] * 
	* 
	W1213 12:09:36.396261    7577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:09:36.457595    7577 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-990000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
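For context on the failure mode above: the stderr shows the hyperkit driver polling /var/db/dhcpd_leases roughly every two seconds, looking for a lease whose hardware address matches the VM's generated MAC (32:ce:34:ac:4a:28), and giving up after ~30 attempts with "IP address never found in dhcp leases file". The sketch below illustrates that scan. It is not minikube's implementation: the record layout (ip_address=/hw_address= lines) is an assumption about the macOS bootpd lease-file format, and the octet normalization mirrors the unpadded hex visible in the ID fields logged above (0f vs f).

	// lease_scan.go — a minimal sketch of the polling loop in the log above,
	// not minikube's actual code. Assumes each /var/db/dhcpd_leases record
	// lists ip_address= before hw_address= (assumed macOS bootpd layout).
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	// normalizeMAC lower-cases a MAC and strips leading zeros from each octet,
	// since bootpd appears to store unpadded hex (compare HWAddress
	// ae:fd:e9:0f:81:f3 with its ID field ae:fd:e9:f:81:f3 above).
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			if t := strings.TrimLeft(p, "0"); t != "" {
				parts[i] = t
			} else {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}
	
	// findIPForMAC scans the lease file for a record whose hw_address matches
	// mac and returns the ip_address seen earlier in the same record.
	func findIPForMAC(path, mac string) (string, bool) {
		f, err := os.Open(path)
		if err != nil {
			return "", false // treat an unreadable file like "no lease yet"
		}
		defer f.Close()
	
		want := normalizeMAC(mac)
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 {
					hw = hw[i+1:] // drop the "1," hardware-type prefix
				}
				if normalizeMAC(hw) == want {
					return ip, true
				}
			}
		}
		return "", false
	}
	
	func main() {
		const mac = "32:ce:34:ac:4a:28" // the MAC the driver was waiting on
		for attempt := 0; attempt < 30; attempt++ {
			if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
				fmt.Println("found IP:", ip)
				return
			}
			time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
		}
		fmt.Println("IP address never found in dhcp leases file")
	}

Note that all 19 leases the driver does find belong to other profiles; the target MAC never appears, which suggests the new guest never completed a DHCP exchange (i.e., the VM failed to boot or attach to the network, rather than the scan itself misbehaving).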
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-990000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-990000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (201.268854ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-990000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-990000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
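The probe that failed here boils down to running docker info inside the VM with a Go template selecting one field. A rough stand-in, driven from Go — the binary path and profile name are copied from this log, and the expected "systemd" value is presumed from the test's name, not confirmed from its source:

	// cgroup_probe.go — a sketch of the docker_test.go probe above, not the
	// harness itself.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Equivalent to:
		//   out/minikube-darwin-amd64 -p force-systemd-env-990000 ssh "docker info --format {{.CgroupDriver}}"
		cmd := exec.Command("out/minikube-darwin-amd64",
			"-p", "force-systemd-env-990000",
			"ssh", "docker info --format {{.CgroupDriver}}")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// With no VM to ssh into, this fails as above with exit status 50
			// (DRV_CP_ENDPOINT: no control-plane endpoint could be resolved).
			fmt.Println("probe failed:", err)
			return
		}
		// TestForceSystemdEnv presumably asserts this prints "systemd" when
		// MINIKUBE_FORCE_SYSTEMD is honored by the provisioner.
		fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
	}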
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-12-13 12:09:36.774843 -0800 PST m=+4024.642042724
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-990000 -n force-systemd-env-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-990000 -n force-systemd-env-990000: exit status 7 (100.822138ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:09:36.873424    7689 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1213 12:09:36.873449    7689 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-990000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-990000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-990000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-990000: (5.291880356s)
--- FAIL: TestForceSystemdEnv (233.19s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (289.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-224000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-224000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-224000 -v=7 --alsologtostderr: (27.097348441s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-224000 --wait=true -v=7 --alsologtostderr
E1213 11:33:42.117707    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:34:19.490371    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:34:47.204157    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-224000 --wait=true -v=7 --alsologtostderr: exit status 90 (4m17.806226393s)

                                                
                                                
-- stdout --
	* [ha-224000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-224000" primary control-plane node in "ha-224000" cluster
	* Restarting existing hyperkit VM for "ha-224000" ...
	* Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	* Enabled addons: 
	
	* Starting "ha-224000-m02" control-plane node in "ha-224000" cluster
	* Restarting existing hyperkit VM for "ha-224000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.6
	* Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	  - env NO_PROXY=192.169.0.6
	* Verifying Kubernetes components...
	
	* Starting "ha-224000-m03" control-plane node in "ha-224000" cluster
	* Restarting existing hyperkit VM for "ha-224000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.6,192.169.0.7
	* Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	  - env NO_PROXY=192.169.0.6
	  - env NO_PROXY=192.169.0.6,192.169.0.7
	* Verifying Kubernetes components...
	
	* Starting "ha-224000-m04" worker node in "ha-224000" cluster
	* Restarting existing hyperkit VM for "ha-224000-m04" ...
	* Found network options:
	  - NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8
	
	

                                                
                                                
-- /stdout --
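One detail worth noting in the stdout above: each restarted node is handed a NO_PROXY list containing the IPs of every node started before it (m02 gets .6; m03 gets .6,.7; m04 gets .6,.7,.8), so intra-cluster traffic bypasses any configured proxy. A toy illustration of that accumulation — the IPs are the ones above, but the loop is illustrative, not minikube's code:

	// no_proxy_sketch.go — illustrates the NO_PROXY accumulation visible in
	// the stdout above; a sketch, not minikube's implementation.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		// Control-plane IPs in start order, taken from the log.
		ips := []string{"192.169.0.6", "192.169.0.7", "192.169.0.8"}
		for i := range ips {
			// Node m0(i+2) starts with NO_PROXY covering nodes 1..i+1.
			fmt.Printf("m%02d: NO_PROXY=%s\n", i+2, strings.Join(ips[:i+1], ","))
		}
	}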
** stderr ** 
	I1213 11:33:23.556546    5233 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:33:23.556761    5233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:33:23.556766    5233 out.go:358] Setting ErrFile to fd 2...
	I1213 11:33:23.556770    5233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:33:23.556939    5233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:33:23.558493    5233 out.go:352] Setting JSON to false
	I1213 11:33:23.588845    5233 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1973,"bootTime":1734116430,"procs":551,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 11:33:23.588936    5233 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 11:33:23.610818    5233 out.go:177] * [ha-224000] minikube v1.34.0 on Darwin 15.1.1
	I1213 11:33:23.652607    5233 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 11:33:23.652667    5233 notify.go:220] Checking for updates...
	I1213 11:33:23.695155    5233 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:23.716580    5233 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 11:33:23.758076    5233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:33:23.778447    5233 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 11:33:23.799542    5233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:33:23.821105    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:23.821299    5233 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 11:33:23.821877    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:23.821927    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:23.834367    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51814
	I1213 11:33:23.834740    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:23.835143    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:23.835152    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:23.835371    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:23.835545    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:23.867473    5233 out.go:177] * Using the hyperkit driver based on existing profile
	I1213 11:33:23.909252    5233 start.go:297] selected driver: hyperkit
	I1213 11:33:23.909282    5233 start.go:901] validating driver "hyperkit" against &{Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:23.909534    5233 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:33:23.909725    5233 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:33:23.909981    5233 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20090-800/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1213 11:33:23.922579    5233 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1213 11:33:23.929434    5233 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:23.929452    5233 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1213 11:33:23.935885    5233 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:33:23.935924    5233 cni.go:84] Creating CNI manager for ""
	I1213 11:33:23.935972    5233 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1213 11:33:23.936044    5233 start.go:340] cluster config:
	{Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:23.936181    5233 iso.go:125] acquiring lock: {Name:mke3ec926417a11c6d5b1356d2702df4068fa1cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:33:23.978382    5233 out.go:177] * Starting "ha-224000" primary control-plane node in "ha-224000" cluster
	I1213 11:33:23.999338    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:23.999406    5233 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1213 11:33:23.999429    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:33:23.999602    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:33:23.999621    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:33:23.999813    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:24.000837    5233 start.go:360] acquireMachinesLock for ha-224000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:33:24.000950    5233 start.go:364] duration metric: took 87.843µs to acquireMachinesLock for "ha-224000"
	I1213 11:33:24.000984    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:33:24.001006    5233 fix.go:54] fixHost starting: 
	I1213 11:33:24.001462    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:24.001491    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:24.013395    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51816
	I1213 11:33:24.013731    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:24.014113    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:24.014132    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:24.014335    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:24.014453    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:24.014563    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:33:24.014649    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.014739    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 4112
	I1213 11:33:24.015879    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid 4112 missing from process table
	I1213 11:33:24.015946    5233 fix.go:112] recreateIfNeeded on ha-224000: state=Stopped err=<nil>
	I1213 11:33:24.015971    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	W1213 11:33:24.016061    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:33:24.037410    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000" ...
	I1213 11:33:24.058353    5233 main.go:141] libmachine: (ha-224000) Calling .Start
	I1213 11:33:24.058516    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.058530    5233 main.go:141] libmachine: (ha-224000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid
	I1213 11:33:24.059997    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid 4112 missing from process table
	I1213 11:33:24.060006    5233 main.go:141] libmachine: (ha-224000) DBG | pid 4112 is in state "Stopped"
	I1213 11:33:24.060020    5233 main.go:141] libmachine: (ha-224000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid...
	I1213 11:33:24.060148    5233 main.go:141] libmachine: (ha-224000) DBG | Using UUID b2cf51fb-709d-45fe-a947-282a845e5503
	I1213 11:33:24.195839    5233 main.go:141] libmachine: (ha-224000) DBG | Generated MAC e2:1f:26:f2:db:4d
	I1213 11:33:24.195876    5233 main.go:141] libmachine: (ha-224000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:33:24.196013    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b2cf51fb-709d-45fe-a947-282a845e5503", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:24.196037    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b2cf51fb-709d-45fe-a947-282a845e5503", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:24.196083    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b2cf51fb-709d-45fe-a947-282a845e5503", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/ha-224000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:33:24.196130    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b2cf51fb-709d-45fe-a947-282a845e5503 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/ha-224000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:33:24.196149    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:33:24.198377    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Pid is 5248
	I1213 11:33:24.198751    5233 main.go:141] libmachine: (ha-224000) DBG | Attempt 0
	I1213 11:33:24.198766    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.198839    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:33:24.200071    5233 main.go:141] libmachine: (ha-224000) DBG | Searching for e2:1f:26:f2:db:4d in /var/db/dhcpd_leases ...
	I1213 11:33:24.200197    5233 main.go:141] libmachine: (ha-224000) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:33:24.200237    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:33:24.200259    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c99d7}
	I1213 11:33:24.200275    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c98c5}
	I1213 11:33:24.200287    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9849}
	I1213 11:33:24.200302    5233 main.go:141] libmachine: (ha-224000) DBG | Found match: e2:1f:26:f2:db:4d
	I1213 11:33:24.200309    5233 main.go:141] libmachine: (ha-224000) DBG | IP: 192.169.0.6
	I1213 11:33:24.200346    5233 main.go:141] libmachine: (ha-224000) Calling .GetConfigRaw
	I1213 11:33:24.201046    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:24.201273    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:24.201998    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:33:24.202010    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:24.202152    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:24.202253    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:24.202345    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:24.202460    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:24.202575    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:24.202734    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:24.202918    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:24.202926    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:33:24.209830    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:33:24.275074    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:33:24.275977    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:24.275998    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:24.276018    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:24.276028    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:24.664445    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:33:24.664462    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:33:24.779029    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:24.779050    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:24.779061    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:24.779087    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:24.779925    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:33:24.779935    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:33:30.509300    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:33:30.509378    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:33:30.509389    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:33:30.535654    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:33:35.263286    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:33:35.263305    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.263484    5233 buildroot.go:166] provisioning hostname "ha-224000"
	I1213 11:33:35.263495    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.263594    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.263690    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.263795    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.263879    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.263974    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.264111    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.264249    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.264257    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000 && echo "ha-224000" | sudo tee /etc/hostname
	I1213 11:33:35.330220    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000
	
	I1213 11:33:35.330242    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.330385    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.330487    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.330579    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.330683    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.330825    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.330962    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.330973    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:33:35.395347    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:33:35.395367    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:33:35.395380    5233 buildroot.go:174] setting up certificates
	I1213 11:33:35.395390    5233 provision.go:84] configureAuth start
	I1213 11:33:35.395396    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.395536    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:35.395626    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.395729    5233 provision.go:143] copyHostCerts
	I1213 11:33:35.395759    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:33:35.395813    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:33:35.395824    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:33:35.395941    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:33:35.396166    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:33:35.396198    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:33:35.396203    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:33:35.396305    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:33:35.396479    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:33:35.396511    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:33:35.396516    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:33:35.396585    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:33:35.396750    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000 san=[127.0.0.1 192.169.0.6 ha-224000 localhost minikube]
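
The server cert generated above is a leaf certificate signed by the profile CA, with the bracketed names and IPs as subject alternative names. A self-contained standard-library Go sketch of issuing such a cert; the throwaway CA here stands in for ca.pem/ca-key.pem, and all lifetimes are illustrative:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA, standing in for the ca.pem/ca-key.pem in the log.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			BasicConstraintsValid: true,
			KeyUsage:              x509.KeyUsageCertSign,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		ca, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}
		// Server leaf with the SAN list from the log line above.
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-224000"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-224000", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		}
		der, err := x509.CreateCertificate(rand.Reader, leafTmpl, ca, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued %d-byte DER server cert\n", len(der))
	}
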
	I1213 11:33:35.608012    5233 provision.go:177] copyRemoteCerts
	I1213 11:33:35.608088    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:33:35.608110    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.608273    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.608376    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.608484    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.608616    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:35.643782    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:33:35.643849    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:33:35.663504    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:33:35.663563    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1213 11:33:35.683076    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:33:35.683137    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:33:35.702561    5233 provision.go:87] duration metric: took 307.16247ms to configureAuth
	I1213 11:33:35.702573    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:33:35.702742    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:35.702756    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:35.702886    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.702984    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.703073    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.703154    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.703252    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.703383    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.703507    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.703514    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:33:35.761527    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:33:35.761539    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:33:35.761614    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:33:35.761631    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.761761    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.761867    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.761952    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.762029    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.762180    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.762322    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.762369    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:33:35.829448    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:33:35.829473    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.829611    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.829710    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.829804    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.829882    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.830037    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.830180    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.830192    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:33:37.506714    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:33:37.506731    5233 machine.go:96] duration metric: took 13.304830015s to provisionDockerMachine
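
The `diff -u ... || { mv ...; systemctl ... }` one-liner above makes the unit install idempotent: the rendered file is swapped into place, and docker reloaded and restarted, only when it differs from what is already installed (here the diff fails because no unit existed yet, so the new file is installed). The same pattern sketched in Go, illustrative rather than minikube's code:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// replaceIfChanged installs content at path only when it differs from the
	// current file; a true result tells the caller to do the equivalent of
	// `systemctl daemon-reload && systemctl restart <unit>`.
	func replaceIfChanged(path string, content []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil // unchanged, no restart needed
		}
		tmp := path + ".new"
		if err := os.WriteFile(tmp, content, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(tmp, path) // atomic swap on the same filesystem
	}

	func main() {
		changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		fmt.Println("restart needed:", changed, "err:", err)
	}
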
	I1213 11:33:37.506744    5233 start.go:293] postStartSetup for "ha-224000" (driver="hyperkit")
	I1213 11:33:37.506752    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:33:37.506763    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.506964    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:33:37.506981    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.507084    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.507184    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.507273    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.507359    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.549053    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:33:37.553822    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:33:37.553837    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:33:37.553928    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:33:37.554104    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:33:37.554111    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:33:37.554283    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:33:37.567654    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:33:37.594179    5233 start.go:296] duration metric: took 87.426295ms for postStartSetup
	I1213 11:33:37.594207    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.594408    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:33:37.594421    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.594508    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.594590    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.594724    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.594816    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.628799    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:33:37.628871    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:33:37.659933    5233 fix.go:56] duration metric: took 13.659041433s for fixHost
	I1213 11:33:37.659954    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.660095    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.660190    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.660283    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.660359    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.660499    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:37.660647    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:37.660654    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:33:37.718237    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118417.855687365
	
	I1213 11:33:37.718250    5233 fix.go:216] guest clock: 1734118417.855687365
	I1213 11:33:37.718256    5233 fix.go:229] Guest: 2024-12-13 11:33:37.855687365 -0800 PST Remote: 2024-12-13 11:33:37.659944 -0800 PST m=+14.144143612 (delta=195.743365ms)
	I1213 11:33:37.718279    5233 fix.go:200] guest clock delta is within tolerance: 195.743365ms
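
The guest-clock check above runs `date +%s.%N` inside the VM and compares the result against the host clock, accepting skew under a tolerance. A small sketch of that comparison; the 2s tolerance is an assumption, since the log does not state minikube's actual threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output (e.g. "1734118417.855687365")
	// into a time.Time. %N always yields nine zero-padded digits, so the
	// fractional part maps directly to nanoseconds.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1734118417.855687365") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta < 2*time.Second)
	}
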
	I1213 11:33:37.718284    5233 start.go:83] releasing machines lock for "ha-224000", held for 13.717432141s
	I1213 11:33:37.718302    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718458    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:37.718557    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718855    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718959    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.719072    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:33:37.719100    5233 ssh_runner.go:195] Run: cat /version.json
	I1213 11:33:37.719104    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.719118    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.719221    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.719232    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.719345    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.719360    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.719454    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.719480    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.719588    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.719609    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.801992    5233 ssh_runner.go:195] Run: systemctl --version
	I1213 11:33:37.807211    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:33:37.811454    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:33:37.811510    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:33:37.823724    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:33:37.823735    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:33:37.823838    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:33:37.842317    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:33:37.851247    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:33:37.859919    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:33:37.859977    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:33:37.868699    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:33:37.877385    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:33:37.885895    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:33:37.894631    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:33:37.903433    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:33:37.912080    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:33:37.920838    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
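
The sed series above rewrites /etc/containerd/config.toml in place to select the cgroupfs driver and the runc v2 runtime. A minimal Go equivalent of just the SystemdCgroup toggle, assuming the stock config layout (a sketch, not minikube's implementation):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		// Same substitution as the sed command in the log:
		//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
	}
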
	I1213 11:33:37.929686    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:33:37.937526    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:33:37.937575    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:33:37.946343    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:33:37.954321    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:38.055814    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:33:38.074538    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:33:38.074638    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:33:38.087031    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:33:38.101085    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:33:38.116013    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:33:38.126951    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:33:38.137488    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:33:38.158482    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:33:38.168678    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:33:38.183844    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:33:38.186730    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:33:38.193926    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:33:38.207186    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:33:38.306381    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:33:38.409182    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:33:38.409284    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:33:38.423485    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:38.520298    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:33:40.856468    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336161165s)
	I1213 11:33:40.856560    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:33:40.867785    5233 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 11:33:40.881291    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:33:40.891767    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:33:40.985833    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:33:41.094364    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.203166    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:33:41.217499    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:33:41.228676    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.322265    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:33:41.392321    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:33:41.392423    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:33:41.396866    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:33:41.396929    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:33:41.400110    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:33:41.428478    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:33:41.428562    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:33:41.446343    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:33:41.486067    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:33:41.486118    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:41.486570    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:33:41.490428    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:33:41.500921    5233 kubeadm.go:883] updating cluster {Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:33:41.501009    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:41.501080    5233 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 11:33:41.514302    5233 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.7
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1213 11:33:41.514313    5233 docker.go:619] Images already preloaded, skipping extraction
	I1213 11:33:41.514404    5233 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 11:33:41.528088    5233 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.7
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1213 11:33:41.528111    5233 cache_images.go:84] Images are preloaded, skipping loading
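
The repeated `docker images --format {{.Repository}}:{{.Tag}}` runs above are how the preload check decides that extracting the image tarball can be skipped. A rough Go sketch of that decision, with the required set trimmed to a few of the images listed (illustrative, not minikube's cache_images code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/pause:3.10",
		}
		for _, img := range required {
			if !have[img] {
				fmt.Println("missing, would extract preload:", img)
				return
			}
		}
		fmt.Println("images already preloaded, skipping extraction")
	}
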
	I1213 11:33:41.528123    5233 kubeadm.go:934] updating node { 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1213 11:33:41.528195    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:33:41.528276    5233 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 11:33:41.563286    5233 cni.go:84] Creating CNI manager for ""
	I1213 11:33:41.563301    5233 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1213 11:33:41.563314    5233 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 11:33:41.563331    5233 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.6 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-224000 NodeName:ha-224000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:33:41.563411    5233 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-224000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.6"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.6"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:33:41.563429    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:33:41.563502    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:33:41.577356    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:33:41.577431    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1213 11:33:41.577503    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:33:41.586076    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:33:41.586130    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1213 11:33:41.593693    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1213 11:33:41.607111    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:33:41.620717    5233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1213 11:33:41.634595    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:33:41.648138    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:33:41.651088    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:33:41.660611    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.764209    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:33:41.776920    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.6
	I1213 11:33:41.776935    5233 certs.go:194] generating shared ca certs ...
	I1213 11:33:41.776947    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.777111    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:33:41.777172    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:33:41.777182    5233 certs.go:256] generating profile certs ...
	I1213 11:33:41.777268    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:33:41.777289    5233 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848
	I1213 11:33:41.777307    5233 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.6 192.169.0.7 192.169.0.8 192.169.0.254]
	I1213 11:33:41.924008    5233 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 ...
	I1213 11:33:41.924024    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848: {Name:mk14c8bdd605a32a15c7e818d08d02d64b9be917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.925000    5233 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848 ...
	I1213 11:33:41.925011    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848: {Name:mk0673ccf9e28132db2b00d320fea4d73482d286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.925290    5233 certs.go:381] copying /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 -> /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt
	I1213 11:33:41.925479    5233 certs.go:385] copying /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848 -> /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key
	I1213 11:33:41.925688    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:33:41.925697    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:33:41.925721    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:33:41.925741    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:33:41.925761    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:33:41.925780    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:33:41.925802    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:33:41.925823    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:33:41.925841    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:33:41.925928    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:33:41.925965    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:33:41.925979    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:33:41.926013    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:33:41.926042    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:33:41.926077    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:33:41.926146    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:33:41.926184    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:33:41.926207    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:41.926225    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:33:41.927710    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:33:41.951166    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:33:41.975929    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:33:42.015520    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:33:42.051250    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:33:42.097395    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:33:42.139215    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:33:42.167922    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:33:42.188284    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:33:42.207671    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:33:42.226762    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:33:42.245781    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:33:42.259332    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:33:42.263629    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:33:42.272753    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.276074    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.276126    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.280400    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:33:42.289318    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:33:42.298635    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.301936    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.301986    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.306272    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 11:33:42.315219    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:33:42.324178    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.327536    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.327583    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.331821    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
	I1213 11:33:42.340849    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:33:42.344177    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:33:42.348774    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:33:42.353021    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:33:42.357742    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:33:42.361999    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:33:42.366226    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
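
Each `-checkend 86400` call above asks openssl whether the certificate stays valid for at least another 24 hours (openssl exits nonzero when it does not). The same test in standard-library Go; a sketch, with the path being one of the certs from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM cert at path expires within d,
	// the condition `openssl x509 -checkend` flags with a nonzero exit.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", expiring)
	}
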
	I1213 11:33:42.370715    5233 kubeadm.go:392] StartCluster: {Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:42.370839    5233 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 11:33:42.382402    5233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:33:42.390619    5233 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1213 11:33:42.390630    5233 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1213 11:33:42.390688    5233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:33:42.399169    5233 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:33:42.399486    5233 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-224000" does not appear in /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.399573    5233 kubeconfig.go:62] /Users/jenkins/minikube-integration/20090-800/kubeconfig needs updating (will repair): [kubeconfig missing "ha-224000" cluster setting kubeconfig missing "ha-224000" context setting]
	I1213 11:33:42.399754    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/kubeconfig: {Name:mk8eff3a3a3e37d84455f265c7172359004b7be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.400139    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.400368    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:33:42.400704    5233 cert_rotation.go:140] Starting client certificate rotation controller
	I1213 11:33:42.400887    5233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:33:42.408731    5233 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.6
	I1213 11:33:42.408748    5233 kubeadm.go:597] duration metric: took 18.113581ms to restartPrimaryControlPlane
	I1213 11:33:42.408754    5233 kubeadm.go:394] duration metric: took 38.045507ms to StartCluster
	I1213 11:33:42.408764    5233 settings.go:142] acquiring lock: {Name:mk0626482d1a77203bd9c1b6d841b6780f4771c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.408852    5233 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.409247    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/kubeconfig: {Name:mk8eff3a3a3e37d84455f265c7172359004b7be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.409470    5233 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:33:42.409483    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:33:42.409500    5233 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:33:42.409614    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:42.452999    5233 out.go:177] * Enabled addons: 
	I1213 11:33:42.473889    5233 addons.go:510] duration metric: took 64.391249ms for enable addons: enabled=[]
	I1213 11:33:42.473995    5233 start.go:246] waiting for cluster config update ...
	I1213 11:33:42.474008    5233 start.go:255] writing updated cluster config ...
	I1213 11:33:42.496132    5233 out.go:201] 
	I1213 11:33:42.517570    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:42.517711    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.541038    5233 out.go:177] * Starting "ha-224000-m02" control-plane node in "ha-224000" cluster
	I1213 11:33:42.583131    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:42.583188    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:33:42.583372    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:33:42.583392    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:33:42.583516    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.584724    5233 start.go:360] acquireMachinesLock for ha-224000-m02: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:33:42.584832    5233 start.go:364] duration metric: took 83.288µs to acquireMachinesLock for "ha-224000-m02"
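
acquireMachinesLock above is a named file lock with a 500ms retry delay and a 13m timeout (the Delay/Timeout fields in the dumped lock spec). A self-contained sketch of such an acquire loop using an O_EXCL lock file; the lock-file path scheme below is invented for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file until timeout, mirroring the
// Delay:500ms / Timeout:13m parameters logged above. The path is hypothetical.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/machines-ha-224000-m02.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	// ... fixHost / start the machine while holding the lock ...
}
```
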
	I1213 11:33:42.584859    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:33:42.584868    5233 fix.go:54] fixHost starting: m02
	I1213 11:33:42.585263    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:42.585289    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:42.597490    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51838
	I1213 11:33:42.598009    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:42.598520    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:42.598537    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:42.598854    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:42.598984    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:33:42.599156    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetState
	I1213 11:33:42.599250    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.599342    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid from json: 5143
	I1213 11:33:42.600521    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid 5143 missing from process table
	I1213 11:33:42.600553    5233 fix.go:112] recreateIfNeeded on ha-224000-m02: state=Stopped err=<nil>
	I1213 11:33:42.600561    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	W1213 11:33:42.600657    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:33:42.642952    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m02" ...
	I1213 11:33:42.664177    5233 main.go:141] libmachine: (ha-224000-m02) Calling .Start
	I1213 11:33:42.664494    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.664558    5233 main.go:141] libmachine: (ha-224000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid
	I1213 11:33:42.666694    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid 5143 missing from process table
	I1213 11:33:42.666707    5233 main.go:141] libmachine: (ha-224000-m02) DBG | pid 5143 is in state "Stopped"
	I1213 11:33:42.666723    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid...
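
"hyperkit pid 5143 missing from process table" is the result of probing the pid recorded in hyperkit.pid, after which the stale file is removed. A sketch of that liveness probe on Unix, where sending signal 0 tests for process existence without delivering anything:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive reports whether the pid stored in a pidfile refers to a live
// process. On Unix, os.FindProcess always succeeds, so the real test is
// whether signal 0 can be delivered.
func pidAlive(pidfile string) (int, bool) {
	raw, err := os.ReadFile(pidfile)
	if err != nil {
		return 0, false
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(raw)))
	if err != nil {
		return 0, false
	}
	proc, _ := os.FindProcess(pid)
	return pid, proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	pidfile := "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid"
	if pid, alive := pidAlive(pidfile); !alive {
		fmt.Printf("pid %d missing from process table, removing stale %s\n", pid, pidfile)
		os.Remove(pidfile)
	}
}
```
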
	I1213 11:33:42.667115    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Using UUID 573e64b1-a821-4bce-aba3-b379863bb495
	I1213 11:33:42.694947    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Generated MAC fa:54:eb:53:13:e6
	I1213 11:33:42.695001    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:33:42.695241    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"573e64b1-a821-4bce-aba3-b379863bb495", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000429650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:42.695304    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"573e64b1-a821-4bce-aba3-b379863bb495", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000429650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:42.695353    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "573e64b1-a821-4bce-aba3-b379863bb495", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/ha-224000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:33:42.695424    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 573e64b1-a821-4bce-aba3-b379863bb495 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/ha-224000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:33:42.695442    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:33:42.697074    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Pid is 5263
	I1213 11:33:42.697519    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Attempt 0
	I1213 11:33:42.697548    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.697612    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid from json: 5263
	I1213 11:33:42.699596    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Searching for fa:54:eb:53:13:e6 in /var/db/dhcpd_leases ...
	I1213 11:33:42.699713    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:33:42.699733    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:33:42.699753    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:33:42.699767    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c99d7}
	I1213 11:33:42.699789    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Found match: fa:54:eb:53:13:e6
	I1213 11:33:42.699807    5233 main.go:141] libmachine: (ha-224000-m02) DBG | IP: 192.169.0.7
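
With the VM restarted under the same generated MAC (fa:54:eb:53:13:e6), the guest IP is recovered by scanning macOS's /var/db/dhcpd_leases for a matching hardware address, as the DBG lines above show. A rough sketch of that scan, assuming the bootpd lease format of `{...}` records containing name=/ip_address=/hw_address= lines:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC walks /var/db/dhcpd_leases looking for the record whose hw_address
// matches mac, returning its ip_address. The record layout (a {...} block of
// key=value lines, as written by macOS bootpd) is an assumption here.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip = "" // start of a new lease record
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is "1,<mac>"; note bootpd drops leading zeros in
			// octets (see the e2:d2:9:... entry above), which this naive
			// suffix match does not normalize.
			if strings.HasSuffix(line, ","+mac) {
				return ip, sc.Err()
			}
		}
	}
	return "", fmt.Errorf("%s not found in %s", mac, path)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "fa:54:eb:53:13:e6")
	if err != nil {
		panic(err)
	}
	fmt.Println("IP:", ip) // 192.169.0.7 in the run above
}
```
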
	I1213 11:33:42.699845    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetConfigRaw
	I1213 11:33:42.700566    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:33:42.700747    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.701233    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:33:42.701243    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:33:42.701360    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:33:42.701474    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:33:42.701583    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:33:42.701690    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:33:42.701786    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:33:42.701932    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:42.702072    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:33:42.702079    5233 main.go:141] libmachine: About to run SSH command:
	hostname
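
"Using SSH client type: native" means the commands that follow (hostname, the hostname rewrite, the docker.service upload) run over a Go-native SSH session authenticated with the machine's id_rsa key rather than by shelling out to ssh. A minimal sketch of one such round trip with golang.org/x/crypto/ssh; host-key verification is disabled here only to keep the sketch short:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.169.0.7:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; verify host keys in real use
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out) // prints the guest hostname
}
```
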
	I1213 11:33:42.708424    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:33:42.717944    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:33:42.718853    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:42.718881    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:42.718896    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:42.718909    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:43.109099    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:33:43.109114    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:33:43.223848    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:43.223866    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:43.223877    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:43.223884    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:43.224755    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:33:43.224765    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:33:48.997042    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:33:48.997098    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:33:48.997108    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:33:49.020830    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:49 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:34:17.779287    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:34:17.779302    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.779433    5233 buildroot.go:166] provisioning hostname "ha-224000-m02"
	I1213 11:34:17.779441    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.779556    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.779664    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:17.779746    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.779835    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.779942    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:17.780083    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:17.780222    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:17.780230    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m02 && echo "ha-224000-m02" | sudo tee /etc/hostname
	I1213 11:34:17.853511    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m02
	
	I1213 11:34:17.853529    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.853672    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:17.853764    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.853853    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.853936    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:17.854073    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:17.854254    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:17.854268    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:34:17.919686    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:34:17.919701    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:34:17.919711    5233 buildroot.go:174] setting up certificates
	I1213 11:34:17.919720    5233 provision.go:84] configureAuth start
	I1213 11:34:17.919727    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.919878    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:17.919996    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.920105    5233 provision.go:143] copyHostCerts
	I1213 11:34:17.920136    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:34:17.920185    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:34:17.920199    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:34:17.920354    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:34:17.920585    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:34:17.920616    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:34:17.920621    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:34:17.920688    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:34:17.920873    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:34:17.920909    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:34:17.920914    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:34:17.920981    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:34:17.921606    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m02 san=[127.0.0.1 192.169.0.7 ha-224000-m02 localhost minikube]
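
The provision.go line above generates a per-machine server certificate signed by the minikube CA, with the SAN list [127.0.0.1 192.169.0.7 ha-224000-m02 localhost minikube]. A stripped-down sketch of that signing step with crypto/x509; the key size, validity window, and the throwaway CA in main are illustrative values, not minikube's actual parameters:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServerCert signs a server certificate for the SANs logged above using
// an existing CA. Key size, validity, and org are illustrative values.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-224000-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		DNSNames:    []string{"ha-224000-m02", "localhost", "minikube"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
}

func main() {
	// Throwaway CA, standing in for .minikube/certs/ca.pem and ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	der, err := signServerCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server.pem: %d DER bytes\n", len(der))
}
```
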
	I1213 11:34:18.018851    5233 provision.go:177] copyRemoteCerts
	I1213 11:34:18.018930    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:34:18.018950    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.019110    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.019222    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.019333    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.019447    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:18.056757    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:34:18.056824    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:34:18.076340    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:34:18.076402    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:34:18.095849    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:34:18.095918    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:34:18.115722    5233 provision.go:87] duration metric: took 195.866505ms to configureAuth
	I1213 11:34:18.115736    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:34:18.115914    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:18.115934    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:18.116067    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.116155    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.116267    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.116362    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.116456    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.116584    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.116708    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.116716    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:34:18.177000    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:34:18.177013    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:34:18.177102    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:34:18.177115    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.177250    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.177339    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.177434    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.177521    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.177668    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.177802    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.177848    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:34:18.247535    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:34:18.247560    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.247701    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.247799    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.247889    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.247972    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.248144    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.248281    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.248294    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:34:19.945302    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
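The SSH command above is the idempotent-update idiom: diff -u old new || { mv; daemon-reload; enable; restart }, i.e. only replace the unit and bounce the service when the rendered file actually differs. Here the diff fails because docker.service does not exist yet, so the new unit is installed and enabled. The same write-if-changed pattern, sketched locally in Go:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged replaces dst with src and bounces the unit only when the
// contents differ, mirroring the logged shell idiom. Assumes permission to
// rename under /lib/systemd/system and to run systemctl.
func installIfChanged(src, dst, unit string) error {
	newBytes, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	oldBytes, err := os.ReadFile(dst) // a missing dst (first provision) counts as changed
	if err == nil && bytes.Equal(oldBytes, newBytes) {
		return nil // unit already up to date, nothing to restart
	}
	if err := os.Rename(src, dst); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installIfChanged("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service", "docker")
	if err != nil {
		panic(err)
	}
}
```
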
	I1213 11:34:19.945316    5233 machine.go:96] duration metric: took 37.234619508s to provisionDockerMachine
	I1213 11:34:19.945325    5233 start.go:293] postStartSetup for "ha-224000-m02" (driver="hyperkit")
	I1213 11:34:19.945338    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:34:19.945348    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:19.945560    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:34:19.945574    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:19.945673    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:19.945782    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:19.945867    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:19.945970    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:19.983485    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:34:19.986722    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:34:19.986734    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:34:19.986812    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:34:19.986953    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:34:19.986959    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:34:19.987126    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:34:19.994240    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:34:20.014210    5233 start.go:296] duration metric: took 68.83207ms for postStartSetup
	I1213 11:34:20.014230    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.014422    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:34:20.014435    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.014537    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.014623    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.014704    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.014788    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:20.051647    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:34:20.051721    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:34:20.083772    5233 fix.go:56] duration metric: took 37.489367071s for fixHost
	I1213 11:34:20.083797    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.083942    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.084018    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.084114    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.084207    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.084348    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:20.084490    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:20.084497    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:34:20.144388    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118460.015290153
	
	I1213 11:34:20.144404    5233 fix.go:216] guest clock: 1734118460.015290153
	I1213 11:34:20.144410    5233 fix.go:229] Guest: 2024-12-13 11:34:20.015290153 -0800 PST Remote: 2024-12-13 11:34:20.083787 -0800 PST m=+56.558492323 (delta=-68.496847ms)
	I1213 11:34:20.144420    5233 fix.go:200] guest clock delta is within tolerance: -68.496847ms
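
fix.go above parses the guest's date +%s.%N output, compares it against the host clock, and accepts the -68ms delta as within tolerance. A sketch of that parse-and-compare; the tolerance constant is an assumption, since the actual threshold is not printed in this log:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output like "1734118460.015290153"
// into a time.Time (the fraction is assumed to be the full 9-digit %N field).
func parseGuestClock(s string) (time.Time, error) {
	secStr, nsecStr, ok := strings.Cut(strings.TrimSpace(s), ".")
	if !ok {
		nsecStr = "0"
	}
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec, err := strconv.ParseInt(nsecStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1734118460.015290153")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	const tolerance = time.Second // assumed threshold, not minikube's actual value
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, delta > -tolerance && delta < tolerance)
}
```
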
	I1213 11:34:20.144423    5233 start.go:83] releasing machines lock for "ha-224000-m02", held for 37.550011232s
	I1213 11:34:20.144441    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.144584    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:20.167177    5233 out.go:177] * Found network options:
	I1213 11:34:20.188040    5233 out.go:177]   - NO_PROXY=192.169.0.6
	W1213 11:34:20.210009    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:34:20.210052    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.210927    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.211209    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.211385    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:34:20.211422    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	W1213 11:34:20.211452    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:34:20.211589    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:34:20.211610    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.211651    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.211865    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.211907    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.212101    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.212120    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.212285    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.212303    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:20.212458    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	W1213 11:34:20.245031    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:34:20.245108    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:34:20.305744    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:34:20.305779    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:34:20.305887    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:34:20.321917    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:34:20.330318    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:34:20.338449    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:34:20.338512    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:34:20.346961    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:34:20.355388    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:34:20.363629    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:34:20.371829    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:34:20.380410    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:34:20.388794    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:34:20.397231    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
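
The run of sed invocations above rewrites /etc/containerd/config.toml in place: sandbox (pause) image, oom-score restriction, cgroup driver, runc runtime version, CNI conf_dir, and unprivileged ports. Setting SystemdCgroup = false is what "configuring containerd to use cgroupfs" amounts to; here is that one edit, expressed as the equivalent regexp replace in Go:

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	// i.e. force the cgroupfs driver by disabling systemd cgroup management.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(raw, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
```
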
	I1213 11:34:20.405722    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:34:20.413168    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:34:20.413221    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:34:20.421725    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:34:20.429719    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:20.529241    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:34:20.543578    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:34:20.543670    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:34:20.554987    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:34:20.567690    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:34:20.581251    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:34:20.592466    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:34:20.603581    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:34:20.625283    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:34:20.635539    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:34:20.650656    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:34:20.653582    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:34:20.660675    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:34:20.674213    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:34:20.766147    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:34:20.880974    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:34:20.880996    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:34:20.895110    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:20.996896    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:34:23.324011    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.325927019s)
	I1213 11:34:23.324083    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:34:23.334876    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:34:23.345278    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:34:23.440468    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:34:23.550842    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:23.658765    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:34:23.672210    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:34:23.683300    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:23.776286    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:34:23.841785    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:34:23.841892    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:34:23.847288    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:34:23.847368    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:34:23.850479    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:34:23.877340    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:34:23.877457    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:34:23.894304    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:34:23.933199    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:34:23.953827    5233 out.go:177]   - env NO_PROXY=192.169.0.6
	I1213 11:34:23.975731    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:23.976228    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:34:23.980868    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
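
The bash one-liner above is the usual idempotent /etc/hosts update: filter out any stale host.minikube.internal line, append the fresh mapping for the gateway IP, and copy the temp file back into place. The same filter-and-append, sketched in Go (needs root, like the sudo cp in the log):

```go
package main

import (
	"os"
	"strings"
)

func main() {
	const path = "/etc/hosts"
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Drop any stale host.minikube.internal mapping, keep everything else.
	var kept []string
	for _, line := range strings.Split(string(raw), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	// Append the fresh mapping (gateway IP from the log above).
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		"\n192.169.0.1\thost.minikube.internal\n"
	if err := os.WriteFile(path, []byte(out), 0o644); err != nil {
		panic(err) // requires root, just like the sudo cp in the log
	}
}
```
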
	I1213 11:34:23.990424    5233 mustload.go:65] Loading cluster: ha-224000
	I1213 11:34:23.990607    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:23.990844    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:23.990865    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:24.002451    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51860
	I1213 11:34:24.002790    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:24.003114    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:24.003125    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:24.003331    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:24.003469    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:34:24.003590    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:24.003653    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:34:24.004855    5233 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:34:24.005135    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:24.005159    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:24.016676    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51862
	I1213 11:34:24.017013    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:24.017327    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:24.017339    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:24.017581    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:24.017710    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:34:24.017828    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.7
	I1213 11:34:24.017838    5233 certs.go:194] generating shared ca certs ...
	I1213 11:34:24.017849    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:34:24.017995    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:34:24.018055    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:34:24.018064    5233 certs.go:256] generating profile certs ...
	I1213 11:34:24.018159    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:34:24.018227    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.d29f1a5b
	I1213 11:34:24.018283    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:34:24.018291    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:34:24.018312    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:34:24.018338    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:34:24.018360    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:34:24.018382    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:34:24.018401    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:34:24.018420    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:34:24.018438    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:34:24.018527    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:34:24.018569    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:34:24.018578    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:34:24.018614    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:34:24.018649    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:34:24.018679    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:34:24.018787    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:34:24.018831    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.018854    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.018872    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.018902    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:34:24.018999    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:34:24.019091    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:34:24.019182    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:34:24.019261    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:34:24.046997    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 11:34:24.050721    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 11:34:24.059570    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 11:34:24.062693    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 11:34:24.071272    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 11:34:24.074372    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 11:34:24.083223    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 11:34:24.086307    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1213 11:34:24.095588    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 11:34:24.098711    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 11:34:24.107784    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 11:34:24.110902    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1213 11:34:24.120480    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:34:24.141070    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:34:24.160878    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:34:24.180920    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:34:24.200790    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:34:24.220908    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:34:24.240966    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:34:24.260343    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:34:24.279661    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:34:24.298866    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:34:24.318211    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:34:24.337602    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 11:34:24.351230    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 11:34:24.364930    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 11:34:24.378548    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1213 11:34:24.392045    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 11:34:24.405741    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1213 11:34:24.419366    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 11:34:24.433162    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:34:24.437460    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:34:24.446555    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.449893    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.449949    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.454195    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:34:24.463315    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:34:24.472398    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.475806    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.475869    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.480014    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 11:34:24.488936    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:34:24.498028    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.501370    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.501420    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.505749    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
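	The four Run lines above are how the runner registers each CA with the guest's OpenSSL trust store: hash the certificate subject, then symlink <hash>.0 in /etc/ssl/certs at the PEM (b5213941.0 is minikubeCA.pem's hash; 3ec20f2e.0 and 51391683.0 belong to the test certs). A minimal Go sketch of the same pattern; linkCACert is a hypothetical helper, shelling out to openssl exactly as the log does:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a PEM file and
	// symlinks <hash>.0 in /etc/ssl/certs at it, so OpenSSL's hash-based
	// trust-store lookup can find the CA.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}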
	I1213 11:34:24.514801    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:34:24.518173    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:34:24.522615    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:34:24.526939    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:34:24.531281    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:34:24.535563    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:34:24.539842    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
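	Each `-checkend 86400` run above asks openssl whether the certificate will still be valid 24 hours from now; a non-zero exit here would force minikube to regenerate the cert before proceeding. The equivalent check in plain Go, as a sketch using one of the paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the first certificate in pemPath is still
	// valid at least d from now -- the same test as
	// `openssl x509 -checkend <seconds>`.
	func validFor(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}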
	I1213 11:34:24.544160    5233 kubeadm.go:934] updating node {m02 192.169.0.7 8443 v1.31.2 docker true true} ...
	I1213 11:34:24.544222    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:34:24.544239    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:34:24.544284    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:34:24.557092    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:34:24.557131    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
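	The manifest above is dropped into /etc/kubernetes/manifests on each control-plane node; kube-vip then runs leader election over the plndr-cp-lock lease (vip_leaderelection) so exactly one node ARPs for 192.169.0.254, the APIServerHAVIP from the profile config, with lb_enable spreading port 8443 across the control planes. A quick hedged smoke test that the VIP is terminating TLS, using nothing beyond the standard library:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"time"
	)

	// Dial the HA virtual IP on the apiserver port. InsecureSkipVerify is
	// deliberate: this only checks that some control-plane node currently
	// holds the VIP and answers TLS, not that its certificate chains.
	func main() {
		dialer := &net.Dialer{Timeout: 3 * time.Second}
		conn, err := tls.DialWithDialer(dialer, "tcp", "192.169.0.254:8443",
			&tls.Config{InsecureSkipVerify: true})
		if err != nil {
			fmt.Println("VIP not answering:", err)
			return
		}
		defer conn.Close()
		fmt.Println("VIP answered; peer cert CN:",
			conn.ConnectionState().PeerCertificates[0].Subject.CommonName)
	}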
	I1213 11:34:24.557204    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:34:24.566007    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:34:24.566093    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 11:34:24.575831    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1213 11:34:24.589369    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:34:24.603027    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:34:24.616380    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:34:24.619250    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
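	That one-liner keeps /etc/hosts idempotent: strip any existing control-plane.minikube.internal line, append the VIP mapping, and write through a temp file so the file is never left half-written. A standalone Go sketch of the same rewrite; setHostsEntry is a hypothetical helper, not minikube code, with hostsPath parameterised for testing:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsEntry mirrors the shell one-liner in the log: drop any line
	// already ending in "\t"+name, append the fresh mapping, and rename a
	// temp file into place so readers never see a half-written hosts file.
	func setHostsEntry(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		tmp := hostsPath + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, hostsPath)
	}

	func main() {
		fmt.Println(setHostsEntry("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"))
	}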
	I1213 11:34:24.628866    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:24.726853    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:34:24.741435    5233 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:34:24.741619    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:24.762788    5233 out.go:177] * Verifying Kubernetes components...
	I1213 11:34:24.783602    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:24.924600    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:34:24.940595    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:34:24.940795    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 11:34:24.940831    5233 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.6:8443
	I1213 11:34:24.940998    5233 node_ready.go:35] waiting up to 6m0s for node "ha-224000-m02" to be "Ready" ...
	I1213 11:34:24.941077    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:24.941083    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:24.941090    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:24.941095    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:25.941784    5233 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I1213 11:34:25.941996    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:25.942010    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:25.942024    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:25.942031    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:26.943551    5233 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I1213 11:34:26.943636    5233 node_ready.go:53] error getting node "ha-224000-m02": Get "https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02": dial tcp 192.169.0.6:8443: connect: connection refused
	I1213 11:34:26.943705    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:26.943715    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:26.943726    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:26.943733    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.736951    5233 round_trippers.go:574] Response Status: 200 OK in 6791 milliseconds
	I1213 11:34:33.738522    5233 node_ready.go:49] node "ha-224000-m02" has status "Ready":"True"
	I1213 11:34:33.738535    5233 node_ready.go:38] duration metric: took 8.794739664s for node "ha-224000-m02" to be "Ready" ...
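	The Ready wait above is a plain once-per-second poll of GET /api/v1/nodes/<name>: the first two probes die with connection refused while the apiserver restarts behind the VIP, then a 200 arrives and status.conditions reports Ready=True after 8.79s. A self-contained sketch of that loop; a real caller authenticates with the client.crt/client.key from the kubeconfig rather than skipping TLS verification as this toy does:

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
		"time"
	)

	// nodeStatus is the slice of the Node object this check needs:
	// status.conditions[] with their type/status pairs.
	type nodeStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	// waitNodeReady polls once per second, the cadence visible in the log,
	// until the node reports Ready=True or the deadline passes. Connection
	// errors (e.g. the "connection refused" above) are simply retried.
	func waitNodeReady(apiServer, node string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(apiServer + "/api/v1/nodes/" + node); err == nil {
				var ns nodeStatus
				decoded := resp.StatusCode == http.StatusOK &&
					json.NewDecoder(resp.Body).Decode(&ns) == nil
				resp.Body.Close()
				if decoded {
					for _, c := range ns.Status.Conditions {
						if c.Type == "Ready" && c.Status == "True" {
							return nil
						}
					}
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("node %s not Ready within %v", node, timeout)
	}

	func main() {
		fmt.Println(waitNodeReady("https://192.169.0.6:8443", "ha-224000-m02", 6*time.Minute))
	}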
	I1213 11:34:33.738543    5233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 11:34:33.738582    5233 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 11:34:33.738592    5233 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 11:34:33.738642    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:33.738649    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.738656    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.738661    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.750539    5233 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1213 11:34:33.759150    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.759215    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:34:33.759222    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.759229    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.759233    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.789285    5233 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1213 11:34:33.789752    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.789760    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.789766    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.789770    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.799141    5233 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1213 11:34:33.799424    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.799433    5233 pod_ready.go:82] duration metric: took 40.258328ms for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.799440    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.799505    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sswfx
	I1213 11:34:33.799511    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.799516    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.799520    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.807914    5233 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1213 11:34:33.808397    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.808404    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.808415    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.808419    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.813376    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.813909    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.813919    5233 pod_ready.go:82] duration metric: took 14.470417ms for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.813926    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.813967    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000
	I1213 11:34:33.813972    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.813978    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.813982    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.817802    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:33.818281    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.818288    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.818294    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.818299    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.823207    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.823485    5233 pod_ready.go:93] pod "etcd-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.823495    5233 pod_ready.go:82] duration metric: took 9.562079ms for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.823503    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.823545    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m02
	I1213 11:34:33.823551    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.823557    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.823561    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.827781    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.828190    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:33.828197    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.828204    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.828207    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.831785    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:33.832141    5233 pod_ready.go:93] pod "etcd-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.832151    5233 pod_ready.go:82] duration metric: took 8.641657ms for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.832159    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.832202    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m03
	I1213 11:34:33.832207    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.832213    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.832219    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.836265    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.939780    5233 request.go:632] Waited for 102.859328ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:33.939849    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:33.939857    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.939865    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.939871    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.946873    5233 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1213 11:34:33.947618    5233 pod_ready.go:93] pod "etcd-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.947630    5233 pod_ready.go:82] duration metric: took 115.439259ms for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.947652    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.138902    5233 request.go:632] Waited for 191.1655ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:34:34.138938    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:34:34.138982    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.138990    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.138993    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.142609    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:34.339564    5233 request.go:632] Waited for 196.386923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:34.339642    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:34.339652    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.339688    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.339702    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.342232    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:34.342592    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:34.342602    5233 pod_ready.go:82] duration metric: took 394.853592ms for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
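	The "Waited ... due to client-side throttling" lines are the client's own token bucket, not the apiserver pushing back: the rest.Config dumped earlier shows QPS:0, Burst:0, so client-go falls back to its defaults (5 requests/s with a burst of 10 at the time of writing; treat the exact numbers as an assumption). Each pod check issues a pod GET plus a node GET, so the burst drains quickly and the steady ~100-200ms waits follow. The same behaviour in miniature with golang.org/x/time/rate:

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// 5 requests/s with a burst of 10, matching the assumed
		// client-go defaults referenced above.
		limiter := rate.NewLimiter(5, 10)
		ctx := context.Background()
		for i := 0; i < 15; i++ {
			start := time.Now()
			_ = limiter.Wait(ctx) // blocks once the burst is spent
			fmt.Printf("request %2d waited %v\n", i, time.Since(start).Round(time.Millisecond))
		}
	}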
	I1213 11:34:34.342609    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.540215    5233 request.go:632] Waited for 197.501487ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:34:34.540359    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:34:34.540371    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.540384    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.540391    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.544062    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:34.740387    5233 request.go:632] Waited for 195.768993ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:34.740457    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:34.740463    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.740470    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.740474    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.742464    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:34.742759    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:34.742770    5233 pod_ready.go:82] duration metric: took 400.065678ms for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.742777    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.940360    5233 request.go:632] Waited for 197.497147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:34:34.940426    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:34:34.940432    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.940438    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.940442    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.942974    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.139848    5233 request.go:632] Waited for 196.049551ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:35.139909    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:35.139915    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.139922    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.139927    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.142601    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.143154    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:35.143165    5233 pod_ready.go:82] duration metric: took 400.297853ms for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.143173    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.340241    5233 request.go:632] Waited for 196.968883ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:34:35.340288    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:34:35.340294    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.340301    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.340305    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.344403    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:35.539580    5233 request.go:632] Waited for 194.599751ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:35.539614    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:35.539618    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.539625    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.539628    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.541865    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.542227    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:35.542236    5233 pod_ready.go:82] duration metric: took 398.973916ms for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.542244    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.739398    5233 request.go:632] Waited for 197.024136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:35.739550    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:35.739562    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.739574    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.739585    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.743222    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:35.939505    5233 request.go:632] Waited for 195.770633ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:35.939554    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:35.939560    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.939566    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.939572    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.941922    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:36.140471    5233 request.go:632] Waited for 97.089364ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.140522    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.140532    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.140544    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.140552    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.143672    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:36.339675    5233 request.go:632] Waited for 195.459387ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.339785    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.339799    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.339811    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.339818    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.344343    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:36.543195    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.543214    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.543223    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.543228    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.546614    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:36.740875    5233 request.go:632] Waited for 193.633171ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.740939    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.740951    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.740963    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.740974    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.745536    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:37.043269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:37.043284    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.043293    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.043297    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.046460    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:37.139384    5233 request.go:632] Waited for 92.520369ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.139445    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.139451    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.139457    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.139461    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.141508    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:37.544411    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:37.544439    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.544458    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.544464    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.548035    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:37.548715    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.548726    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.548734    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.548740    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.551007    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:37.551414    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:38.043335    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:38.043360    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.043371    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.043377    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.046826    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:38.047379    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:38.047390    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.047397    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.047402    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.049403    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:38.543656    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:38.543682    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.543702    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.543709    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.546343    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:38.546787    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:38.546797    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.546803    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.546807    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.548405    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:39.043375    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:39.043397    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.043405    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.043409    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.046060    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:39.046784    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:39.046792    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.046798    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.046801    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.048453    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:39.543079    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:39.543094    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.543100    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.543103    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.545426    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:39.545991    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:39.545999    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.546005    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.546008    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.548059    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:40.044134    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:40.044192    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.044205    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.044212    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.048181    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:40.048585    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:40.048594    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.048600    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.048603    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.050402    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:40.050801    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:40.543746    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:40.543772    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.543785    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.543818    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.547875    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:40.548358    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:40.548366    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.548372    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.548375    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.550043    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:41.043443    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:41.043501    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.043516    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.043523    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.047137    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:41.047586    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:41.047593    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.047598    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.047602    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.049298    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:41.544147    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:41.544170    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.544182    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.544190    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.548033    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:41.548573    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:41.548581    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.548587    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.548592    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.550267    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.044241    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:42.044256    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.044264    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.044268    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.046885    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:42.047355    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:42.047363    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.047369    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.047373    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.049099    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.543746    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:42.543762    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.543771    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.543776    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.546146    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:42.546521    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:42.546529    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.546535    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.546538    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.548300    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.548618    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:43.043836    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:43.043862    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.043875    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.043884    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.047393    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:43.048068    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:43.048075    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.048082    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.048085    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.049985    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:43.544065    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:43.544086    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.544097    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.544117    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.547029    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:43.547638    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:43.547645    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.547651    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.547657    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.549301    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.044961    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:44.044988    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.045023    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.045031    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.048485    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:44.049062    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:44.049070    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.049076    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.049081    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.050740    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.545903    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:44.545928    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.545945    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.545956    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.549955    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:44.550463    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:44.550470    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.550476    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.550479    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.552158    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.552451    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:45.045945    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:45.045972    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.045984    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.045991    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.049387    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:45.050098    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:45.050109    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.050117    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.050123    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.051738    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:45.544140    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:45.544159    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.544168    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.544172    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.546873    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:45.547352    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:45.547360    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.547366    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.547370    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.548773    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:46.043998    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:46.044020    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.044032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.044038    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.047292    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:46.047783    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:46.047790    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.047795    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.047798    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.049310    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:46.544571    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:46.544597    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.544609    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.544616    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.548134    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:46.548745    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:46.548755    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.548762    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.548771    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.550544    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:47.044994    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:47.045015    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.045026    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.045032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.048476    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:47.049178    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:47.049189    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.049197    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.049202    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.050811    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:47.051136    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:47.545774    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:47.545796    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.545809    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.545816    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.549567    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:47.550282    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:47.550292    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.550308    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.550313    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.552150    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:48.044237    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:48.044252    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.044262    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.044267    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.046593    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:48.047034    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:48.047041    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.047047    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.047051    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.048719    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:48.544694    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:48.544762    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.544781    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.544788    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.548156    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:48.548805    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:48.548813    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.548819    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.548830    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.550405    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:49.045819    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:49.045842    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.045854    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.045864    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.049109    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:49.049810    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:49.049821    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.049828    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.049834    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.051675    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:49.052058    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:49.546343    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:49.546370    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.546384    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.546391    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.550058    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:49.550673    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:49.550684    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.550692    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.550697    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.552559    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.044335    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:50.044361    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.044373    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.044380    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.048285    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.048872    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.048879    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.048885    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.048889    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.050497    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.544806    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:50.544862    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.544875    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.544885    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.548751    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.549398    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.549406    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.549412    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.549416    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.550966    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.551275    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.551284    5233 pod_ready.go:82] duration metric: took 15.007121321s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.551291    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.551328    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:34:50.551333    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.551338    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.551343    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.553068    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.553502    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.553509    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.553514    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.553517    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.555304    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.555632    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.555640    5233 pod_ready.go:82] duration metric: took 4.343987ms for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.555647    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.555686    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:34:50.555691    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.555696    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.555699    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.557601    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.557970    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:34:50.557977    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.557983    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.557986    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.559417    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.559883    5233 pod_ready.go:93] pod "kube-proxy-7b8ch" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.559891    5233 pod_ready.go:82] duration metric: took 4.238545ms for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.559899    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.559932    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:34:50.559949    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.559956    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.559960    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.562004    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:50.562348    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:50.562356    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.562361    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.562365    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.563914    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.564222    5233 pod_ready.go:93] pod "kube-proxy-9wj7k" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.564231    5233 pod_ready.go:82] duration metric: took 4.326466ms for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.564237    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.564269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:34:50.564274    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.564280    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.564293    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.565929    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.566322    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.566328    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.566334    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.566337    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.567867    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.568197    5233 pod_ready.go:93] pod "kube-proxy-9wsr4" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.568208    5233 pod_ready.go:82] duration metric: took 3.96239ms for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.568215    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.745519    5233 request.go:632] Waited for 177.216442ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:34:50.745569    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:34:50.745584    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.745599    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.745607    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.748965    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.946816    5233 request.go:632] Waited for 197.362494ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.946935    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.946944    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.946958    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.946964    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.950494    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.950832    5233 pod_ready.go:93] pod "kube-proxy-gmw9z" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.950846    5233 pod_ready.go:82] duration metric: took 382.598257ms for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
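
The request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's client-side rate limiter (a token bucket, historically 5 QPS with a burst of 10), not from server-side API Priority and Fairness. A minimal Go sketch of that pacing with client-go's flowcontrol package follows; the qps/burst values and the pacedGets helper are illustrative assumptions, not minikube's literal code.

    // throttle_sketch.go - sketch of the client-side token bucket behind the
    // "Waited for ... due to client-side throttling" lines (assumed values).
    package sketch

    import (
    	"log"
    	"time"

    	"k8s.io/client-go/util/flowcontrol"
    )

    // pacedGets pushes n logical requests through a 5 QPS / burst-10 limiter,
    // logging whenever a request had to wait noticeably for a token.
    func pacedGets(n int, doGet func()) {
    	limiter := flowcontrol.NewTokenBucketRateLimiter(5.0, 10)
    	for i := 0; i < n; i++ {
    		start := time.Now()
    		limiter.Accept() // blocks until a token is available
    		if wait := time.Since(start); wait > 100*time.Millisecond {
    			log.Printf("Waited for %v due to client-side throttling", wait)
    		}
    		doGet()
    	}
    }
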
	I1213 11:34:50.950855    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.146433    5233 request.go:632] Waited for 195.515852ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:34:51.146519    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:34:51.146528    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.146539    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.146545    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.150256    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.346180    5233 request.go:632] Waited for 195.336158ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:51.346304    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:51.346314    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.346325    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.346333    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.350059    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.350701    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:51.350714    5233 pod_ready.go:82] duration metric: took 399.82535ms for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.350723    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.546175    5233 request.go:632] Waited for 195.389456ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:34:51.546301    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:34:51.546322    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.546341    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.546357    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.549469    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.745754    5233 request.go:632] Waited for 195.890122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:51.745865    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:51.745871    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.745877    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.745881    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.747825    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:51.748179    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:51.748191    5233 pod_ready.go:82] duration metric: took 397.435321ms for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.748198    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.945402    5233 request.go:632] Waited for 197.127949ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:34:51.945442    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:34:51.945447    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.945453    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.945457    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.948002    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:52.146346    5233 request.go:632] Waited for 197.812373ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:52.146446    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:52.146458    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.146470    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.146477    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.150176    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.150503    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:52.150514    5233 pod_ready.go:82] duration metric: took 402.286111ms for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:52.150525    5233 pod_ready.go:39] duration metric: took 18.409559513s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
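
The pod_ready.go entries above trace a plain polling loop: GET the pod, GET its node, check the PodReady condition, and retry on a roughly 500ms cadence until the 6m0s timeout. A minimal sketch of that pattern with client-go, assuming an already-constructed clientset; waitPodReady and the exact interval are illustrative, not the literal pod_ready.go implementation.

    // podready_sketch.go - hedged sketch of polling a pod's Ready condition,
    // in the spirit of the pod_ready.go lines above (assumed shape).
    package sketch

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls GET /api/v1/namespaces/{ns}/pods/{name} until the
    // PodReady condition is True or the timeout elapses.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log timestamps
    	}
    	return fmt.Errorf("pod %q in %q not Ready within %v", name, ns, timeout)
    }
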
	I1213 11:34:52.150552    5233 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:34:52.150642    5233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:52.164316    5233 api_server.go:72] duration metric: took 27.417579599s to wait for apiserver process to appear ...
	I1213 11:34:52.164330    5233 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:34:52.164347    5233 api_server.go:253] Checking apiserver healthz at https://192.169.0.6:8443/healthz ...
	I1213 11:34:52.168889    5233 api_server.go:279] https://192.169.0.6:8443/healthz returned 200:
	ok
	I1213 11:34:52.168929    5233 round_trippers.go:463] GET https://192.169.0.6:8443/version
	I1213 11:34:52.168934    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.168946    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.168950    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.169508    5233 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1213 11:34:52.169593    5233 api_server.go:141] control plane version: v1.31.2
	I1213 11:34:52.169605    5233 api_server.go:131] duration metric: took 5.269383ms to wait for apiserver health ...
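
The healthz wait above is a raw HTTP exchange: GET /healthz expecting a 200 response with body "ok", followed by GET /version to read the control-plane version (v1.31.2 here). A hedged sketch of the healthz probe with net/http; skipping TLS verification is an illustration-only shortcut, since minikube trusts the cluster CA instead.

    // healthz_sketch.go - assumed sketch of an apiserver health probe like the
    // api_server.go:253 line above; not minikube's exact code.
    package sketch

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // apiServerHealthy GETs https://<host>:8443/healthz and succeeds only on a
    // 200 "ok" body, mirroring the "returned 200: ok" lines in the log.
    func apiServerHealthy(host string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustration-only: a real client should load the cluster CA rather
    		// than disable verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(fmt.Sprintf("https://%s:8443/healthz", host))
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	return nil
    }
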
	I1213 11:34:52.169610    5233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:34:52.346116    5233 request.go:632] Waited for 176.438003ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.346261    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.346270    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.346282    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.346288    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.351411    5233 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1213 11:34:52.356738    5233 system_pods.go:59] 26 kube-system pods found
	I1213 11:34:52.356755    5233 system_pods.go:61] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:34:52.356759    5233 system_pods.go:61] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:34:52.356761    5233 system_pods.go:61] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:34:52.356765    5233 system_pods.go:61] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:34:52.356768    5233 system_pods.go:61] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:34:52.356771    5233 system_pods.go:61] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:34:52.356774    5233 system_pods.go:61] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:34:52.356776    5233 system_pods.go:61] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:34:52.356780    5233 system_pods.go:61] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:34:52.356782    5233 system_pods.go:61] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:34:52.356785    5233 system_pods.go:61] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:34:52.356788    5233 system_pods.go:61] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:34:52.356791    5233 system_pods.go:61] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:34:52.356793    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:34:52.356796    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:34:52.356799    5233 system_pods.go:61] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:34:52.356802    5233 system_pods.go:61] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:34:52.356804    5233 system_pods.go:61] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:34:52.356807    5233 system_pods.go:61] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:34:52.356810    5233 system_pods.go:61] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:34:52.356813    5233 system_pods.go:61] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:34:52.356815    5233 system_pods.go:61] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:34:52.356818    5233 system_pods.go:61] "kube-vip-ha-224000" [5e087427-c14c-4a6c-8a87-f20ea865cca7] Running
	I1213 11:34:52.356821    5233 system_pods.go:61] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:34:52.356823    5233 system_pods.go:61] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:34:52.356826    5233 system_pods.go:61] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running
	I1213 11:34:52.356830    5233 system_pods.go:74] duration metric: took 187.204101ms to wait for pod list to return data ...
	I1213 11:34:52.356836    5233 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:34:52.547123    5233 request.go:632] Waited for 190.17926ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:34:52.547175    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:34:52.547184    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.547197    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.547205    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.550987    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.551153    5233 default_sa.go:45] found service account: "default"
	I1213 11:34:52.551169    5233 default_sa.go:55] duration metric: took 194.315508ms for default service account to be created ...
	I1213 11:34:52.551177    5233 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:34:52.745633    5233 request.go:632] Waited for 194.336495ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.745749    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.745782    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.745804    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.745815    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.750592    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:52.755864    5233 system_pods.go:86] 26 kube-system pods found
	I1213 11:34:52.755877    5233 system_pods.go:89] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:34:52.755881    5233 system_pods.go:89] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:34:52.755884    5233 system_pods.go:89] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:34:52.755887    5233 system_pods.go:89] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:34:52.755890    5233 system_pods.go:89] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:34:52.755893    5233 system_pods.go:89] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:34:52.755896    5233 system_pods.go:89] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:34:52.755899    5233 system_pods.go:89] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:34:52.755902    5233 system_pods.go:89] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:34:52.755905    5233 system_pods.go:89] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:34:52.755908    5233 system_pods.go:89] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:34:52.755911    5233 system_pods.go:89] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:34:52.755914    5233 system_pods.go:89] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:34:52.755917    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:34:52.755919    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:34:52.755923    5233 system_pods.go:89] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:34:52.755926    5233 system_pods.go:89] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:34:52.755929    5233 system_pods.go:89] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:34:52.755932    5233 system_pods.go:89] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:34:52.755935    5233 system_pods.go:89] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:34:52.755938    5233 system_pods.go:89] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:34:52.755941    5233 system_pods.go:89] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:34:52.755944    5233 system_pods.go:89] "kube-vip-ha-224000" [5e087427-c14c-4a6c-8a87-f20ea865cca7] Running
	I1213 11:34:52.755946    5233 system_pods.go:89] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:34:52.755952    5233 system_pods.go:89] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:34:52.755956    5233 system_pods.go:89] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running
	I1213 11:34:52.755960    5233 system_pods.go:126] duration metric: took 204.766483ms to wait for k8s-apps to be running ...
	I1213 11:34:52.755970    5233 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:34:52.756038    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:34:52.767749    5233 system_svc.go:56] duration metric: took 11.776634ms WaitForService to wait for kubelet
	I1213 11:34:52.767765    5233 kubeadm.go:582] duration metric: took 28.020992834s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:34:52.767792    5233 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:34:52.945101    5233 request.go:632] Waited for 177.223908ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes
	I1213 11:34:52.945150    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes
	I1213 11:34:52.945158    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.945170    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.945176    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.949117    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.950061    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950074    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950086    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950090    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950094    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950097    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950099    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950102    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950105    5233 node_conditions.go:105] duration metric: took 182.296841ms to run NodePressure ...
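
The NodePressure verification above lists the nodes once and reads each node's capacity, reporting 17734596Ki of ephemeral storage and 2 CPUs for all four ha-224000 nodes. A sketch of that walk with client-go; printNodeCapacities is an assumed helper name.

    // nodecap_sketch.go - sketch of the per-node capacity walk behind the
    // node_conditions.go lines above (assumed shape).
    package sketch

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacities lists all nodes and reports their ephemeral-storage
    // and cpu capacity, as the log does for the four cluster nodes.
    func printNodeCapacities(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }
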
	I1213 11:34:52.950114    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:34:52.950132    5233 start.go:255] writing updated cluster config ...
	I1213 11:34:52.972494    5233 out.go:201] 
	I1213 11:34:52.993694    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:52.993820    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.016586    5233 out.go:177] * Starting "ha-224000-m03" control-plane node in "ha-224000" cluster
	I1213 11:34:53.090440    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:34:53.090478    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:34:53.090696    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:34:53.090718    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:34:53.090850    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.091713    5233 start.go:360] acquireMachinesLock for ha-224000-m03: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:34:53.091822    5233 start.go:364] duration metric: took 84.906µs to acquireMachinesLock for "ha-224000-m03"
	I1213 11:34:53.091846    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:34:53.091854    5233 fix.go:54] fixHost starting: m03
	I1213 11:34:53.092290    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:53.092327    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:53.104639    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51869
	I1213 11:34:53.104960    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:53.105280    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:53.105294    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:53.105531    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:53.105628    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:34:53.105732    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetState
	I1213 11:34:53.105817    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.105891    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid from json: 4216
	I1213 11:34:53.107018    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid 4216 missing from process table
	I1213 11:34:53.107070    5233 fix.go:112] recreateIfNeeded on ha-224000-m03: state=Stopped err=<nil>
	I1213 11:34:53.107090    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	W1213 11:34:53.107166    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:34:53.128583    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m03" ...
	I1213 11:34:53.170463    5233 main.go:141] libmachine: (ha-224000-m03) Calling .Start
	I1213 11:34:53.170757    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.170820    5233 main.go:141] libmachine: (ha-224000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid
	I1213 11:34:53.173341    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid 4216 missing from process table
	I1213 11:34:53.173354    5233 main.go:141] libmachine: (ha-224000-m03) DBG | pid 4216 is in state "Stopped"
	I1213 11:34:53.173370    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid...
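
"hyperkit pid 4216 missing from process table" means the pid recorded in hyperkit.pid no longer names a live process, so the driver treats the pid file as stale and removes it before restarting the VM. On Unix the standard liveness probe is kill(pid, 0), which delivers no signal; a hedged sketch follows (pidAlive and removeIfStale are illustrative names, not the driver's literal code).

    // stalepid_sketch.go - assumed sketch of stale pid-file detection as the
    // hyperkit driver logs it above.
    package sketch

    import (
    	"os"
    	"strconv"
    	"strings"
    	"syscall"
    )

    // pidAlive reports whether a process with the given pid exists, using the
    // classic kill(pid, 0) probe.
    func pidAlive(pid int) bool {
    	p, err := os.FindProcess(pid) // always succeeds on Unix
    	if err != nil {
    		return false
    	}
    	return p.Signal(syscall.Signal(0)) == nil
    }

    // removeIfStale deletes pidFile when the pid it records is gone or unreadable.
    func removeIfStale(pidFile string) error {
    	b, err := os.ReadFile(pidFile)
    	if err != nil {
    		return err
    	}
    	pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
    	if err != nil || !pidAlive(pid) {
    		return os.Remove(pidFile)
    	}
    	return nil // process still running; leave the pid file alone
    }
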
	I1213 11:34:53.173814    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Using UUID a949994f-ed60-4f04-8e19-b8e4ec0a7cc4
	I1213 11:34:53.198944    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Generated MAC a6:90:90:dd:31:4c
	I1213 11:34:53.198971    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:34:53.199150    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043b710)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:34:53.199192    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043b710)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:34:53.199234    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/ha-224000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:34:53.199276    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a949994f-ed60-4f04-8e19-b8e4ec0a7cc4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/ha-224000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:34:53.199299    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:34:53.201829    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Pid is 5320
	I1213 11:34:53.202230    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Attempt 0
	I1213 11:34:53.202250    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.202308    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid from json: 5320
	I1213 11:34:53.203502    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Searching for a6:90:90:dd:31:4c in /var/db/dhcpd_leases ...
	I1213 11:34:53.203593    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:34:53.203623    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9a30}
	I1213 11:34:53.203647    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:34:53.203666    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:34:53.203681    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c98c5}
	I1213 11:34:53.203694    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Found match: a6:90:90:dd:31:4c
	I1213 11:34:53.203705    5233 main.go:141] libmachine: (ha-224000-m03) DBG | IP: 192.169.0.8
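
The driver recovers the VM's IP by scanning macOS's /var/db/dhcpd_leases for the entry whose hardware address matches the MAC it generated (a6:90:90:dd:31:4c above, resolving to 192.169.0.8). A simplified parser sketch; the ip_address=/hw_address= line format is an assumption inferred from the dhcp entries printed above, and octets can appear without leading zeros (note the e2:d2:9:... ID), which real code would normalize.

    // leases_sketch.go - hedged sketch of mapping a MAC to an IP via
    // /var/db/dhcpd_leases, as the "Searching for a6:90:90:dd:31:4c" lines do.
    package sketch

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipForMAC scans the vmnet lease file for a block whose hw_address ends in
    // mac and returns that block's ip_address. Assumes the observed layout in
    // which ip_address= precedes hw_address= inside each lease block.
    func ipForMAC(leaseFile, mac string) (string, error) {
    	f, err := os.Open(leaseFile)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac):
    			return ip, nil
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return "", err
    	}
    	return "", fmt.Errorf("no lease found for MAC %s", mac)
    }
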
	I1213 11:34:53.203714    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetConfigRaw
	I1213 11:34:53.204410    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:34:53.204623    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.205075    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:34:53.205084    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:34:53.205213    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:34:53.205302    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:34:53.205398    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:34:53.205497    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:34:53.205650    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:34:53.205789    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:53.205928    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:34:53.205935    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:34:53.212601    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:34:53.221560    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:34:53.222531    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:34:53.222558    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:34:53.222580    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:34:53.222599    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:34:53.612220    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:34:53.612234    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:34:53.727037    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:34:53.727057    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:34:53.727094    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:34:53.727117    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:34:53.727874    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:34:53.727886    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:34:59.521710    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:34:59.521832    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:34:59.521841    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:34:59.545358    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:35:28.268303    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:35:28.268318    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.268453    5233 buildroot.go:166] provisioning hostname "ha-224000-m03"
	I1213 11:35:28.268464    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.268545    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.268633    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.268718    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.268794    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.268890    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.269047    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.269192    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.269201    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m03 && echo "ha-224000-m03" | sudo tee /etc/hostname
	I1213 11:35:28.331907    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m03
	
	I1213 11:35:28.331923    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.332060    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.332169    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.332280    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.332367    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.332526    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.332658    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.332669    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:35:28.389916    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:35:28.389931    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:35:28.389961    5233 buildroot.go:174] setting up certificates
	I1213 11:35:28.389971    5233 provision.go:84] configureAuth start
	I1213 11:35:28.389982    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.390117    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:28.390208    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.390313    5233 provision.go:143] copyHostCerts
	I1213 11:35:28.390344    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:35:28.390394    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:35:28.390401    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:35:28.390555    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:35:28.390787    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:35:28.390820    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:35:28.390825    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:35:28.390910    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:35:28.391077    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:35:28.391106    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:35:28.391111    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:35:28.391228    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:35:28.391418    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m03 san=[127.0.0.1 192.169.0.8 ha-224000-m03 localhost minikube]
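
provision.go:117 above mints a per-machine server certificate signed by the minikube CA, with SANs covering every name the Docker daemon might be dialed by: 127.0.0.1, the lease IP 192.169.0.8, ha-224000-m03, localhost, and minikube. A compact sketch of issuing such a SAN-bearing certificate with crypto/x509; the 2048-bit key, one-year validity, and serial scheme are assumptions, not minikube's exact parameters.

    // servercert_sketch.go - assumed sketch of issuing a server cert with the
    // SAN list seen in the provision.go:117 line above.
    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServerCert signs a certificate for the given DNS names and IPs with
    // the supplied CA cert/key and returns the DER-encoded certificate.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()), // illustration-only serial
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(1, 0, 0), // assumed one-year validity
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     dnsNames, // e.g. ha-224000-m03, localhost, minikube
    		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.169.0.8
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }
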
	I1213 11:35:28.615259    5233 provision.go:177] copyRemoteCerts
	I1213 11:35:28.615322    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:35:28.615337    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.615483    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.615599    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.615704    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.615808    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:28.648163    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:35:28.648235    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:35:28.668111    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:35:28.668178    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:35:28.688091    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:35:28.688163    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:35:28.707920    5233 provision.go:87] duration metric: took 317.933618ms to configureAuth
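	[editor's note] The server cert generated above carries the SAN list [127.0.0.1 192.169.0.8 ha-224000-m03 localhost minikube]. As a hedged illustration (not minikube's own code), a short Go program with the standard crypto/x509 package can confirm which SANs a PEM-encoded server.pem actually ended up with; the local file path is an assumption for the example.

	    // saninspect.go: print the DNS and IP SANs of a PEM-encoded certificate.
	    // Illustrative sketch only; the path below is a placeholder, not from the log.
	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	    )

	    func main() {
	        pemBytes, err := os.ReadFile("server.pem") // hypothetical local copy
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil || block.Type != "CERTIFICATE" {
	            log.Fatal("no CERTIFICATE block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        fmt.Println("DNS SANs:", cert.DNSNames)    // expect ha-224000-m03, localhost, minikube
	        fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.169.0.8
	    }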
	I1213 11:35:28.707937    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:35:28.708107    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:28.708120    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:28.708271    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.708384    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.708472    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.708567    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.708672    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.708792    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.708915    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.708923    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:35:28.759762    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:35:28.759775    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:35:28.759854    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:35:28.759870    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.760005    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.760093    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.760190    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.760274    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.760438    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.760606    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.760655    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:35:28.823874    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	Environment=NO_PROXY=192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:35:28.823891    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.824044    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.824161    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.824266    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.824376    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.824572    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.824732    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.824746    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:35:30.486456    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:35:30.486475    5233 machine.go:96] duration metric: took 37.280827239s to provisionDockerMachine
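	[editor's note] The docker.service unit written above uses the standard systemd drop-in pattern: an empty ExecStart= first clears the inherited command, then a single replacement ExecStart= is set. A minimal Go text/template sketch of rendering such an override follows; the struct fields and trimmed template are assumptions for illustration, not minikube's actual provisioner types.

	    // unitgen.go: render a trimmed docker.service override like the one above.
	    // A minimal sketch with text/template; not minikube's real template.
	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    const unitTmpl = `[Service]
	    # Clear the inherited ExecStart before setting our own (systemd rejects
	    # multiple ExecStart= lines unless Type=oneshot).
	    ExecStart=
	    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock{{range .Env}}
	    Environment={{.}}{{end}}
	    `

	    func main() {
	        data := struct{ Env []string }{
	            Env: []string{"NO_PROXY=192.169.0.6", "NO_PROXY=192.169.0.6,192.169.0.7"},
	        }
	        tmpl := template.Must(template.New("docker.service").Parse(unitTmpl))
	        if err := tmpl.Execute(os.Stdout, data); err != nil {
	            panic(err)
	        }
	    }

	Note that when the same variable appears in several Environment= lines, systemd keeps the last assignment, so the effective NO_PROXY in the unit above is 192.169.0.6,192.169.0.7.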
	I1213 11:35:30.486485    5233 start.go:293] postStartSetup for "ha-224000-m03" (driver="hyperkit")
	I1213 11:35:30.486499    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:35:30.486509    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.486716    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:35:30.486731    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.486828    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.486916    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.487008    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.487103    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.519400    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:35:30.522965    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:35:30.522976    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:35:30.523076    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:35:30.523222    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:35:30.523229    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:35:30.523407    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:35:30.531672    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:35:30.550850    5233 start.go:296] duration metric: took 64.356166ms for postStartSetup
	I1213 11:35:30.550875    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.551059    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:35:30.551072    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.551169    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.551256    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.551369    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.551457    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.583546    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:35:30.583619    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:35:30.638958    5233 fix.go:56] duration metric: took 37.546530399s for fixHost
	I1213 11:35:30.638984    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.639131    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.639231    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.639317    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.639400    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.639557    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:30.639690    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:30.639697    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:35:30.691357    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118530.813836388
	
	I1213 11:35:30.691371    5233 fix.go:216] guest clock: 1734118530.813836388
	I1213 11:35:30.691376    5233 fix.go:229] Guest: 2024-12-13 11:35:30.813836388 -0800 PST Remote: 2024-12-13 11:35:30.638973 -0800 PST m=+127.105464891 (delta=174.863388ms)
	I1213 11:35:30.691387    5233 fix.go:200] guest clock delta is within tolerance: 174.863388ms
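	[editor's note] fix.go compares the guest's `date +%s.%N` output against the host-side reference and accepts the skew if it is within tolerance; here the delta is 174.863388ms. A hedged Go sketch of the same arithmetic using the timestamps from the log; the 1s tolerance is an assumption for the example, not minikube's configured value.

	    // clockdelta.go: reproduce the guest-vs-host clock comparison from the log.
	    // The tolerance is an assumed value; the timestamps are the logged ones.
	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    func main() {
	        // Guest reported `date +%s.%N` => 1734118530.813836388
	        guest := time.Unix(1734118530, 813836388)
	        // Host-side reference: 2024-12-13 11:35:30.638973 -0800 PST
	        remote := time.Unix(1734118530, 638973000)

	        delta := guest.Sub(remote)
	        fmt.Println("delta:", delta) // 174.863388ms, matching the log

	        const tolerance = time.Second // assumed threshold for illustration
	        if delta < 0 {
	            delta = -delta
	        }
	        fmt.Println("within tolerance:", delta <= tolerance)
	    }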
	I1213 11:35:30.691390    5233 start.go:83] releasing machines lock for "ha-224000-m03", held for 37.598987831s
	I1213 11:35:30.691409    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.691545    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:30.716697    5233 out.go:177] * Found network options:
	I1213 11:35:30.736372    5233 out.go:177]   - NO_PROXY=192.169.0.6,192.169.0.7
	W1213 11:35:30.757863    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:35:30.757920    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:35:30.757939    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.758810    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.759058    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.759249    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:35:30.759286    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	W1213 11:35:30.759290    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:35:30.759313    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:35:30.759449    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:35:30.759471    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.759537    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.759655    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.759708    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.759905    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.759938    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.760131    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.760152    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.760321    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	W1213 11:35:30.790341    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:35:30.790425    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:35:30.835439    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:35:30.835453    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:35:30.835523    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:35:30.850635    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:35:30.858947    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:35:30.867636    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:35:30.867708    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:35:30.876811    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:35:30.885325    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:35:30.893786    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:35:30.902226    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:35:30.910790    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:35:30.919236    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:35:30.927803    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:35:30.936377    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:35:30.943894    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:35:30.943955    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:35:30.952569    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:35:30.959891    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.061578    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:35:31.081433    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:35:31.081517    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:35:31.100335    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:35:31.112429    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:35:31.127499    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:35:31.138533    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:35:31.148917    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:35:31.174782    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:35:31.184889    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:35:31.201805    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:35:31.204856    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:35:31.212060    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:35:31.225973    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:35:31.326706    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:35:31.431909    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:35:31.431936    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:35:31.446011    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.546239    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:35:33.884526    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.338279376s)
	I1213 11:35:33.884605    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:35:33.896180    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:35:33.907512    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:35:34.018152    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:35:34.117342    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:34.216289    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:35:34.229723    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:35:34.241050    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:34.333405    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:35:34.400848    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:35:34.400950    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:35:34.406614    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:35:34.406682    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:35:34.409985    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:35:34.437608    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:35:34.437696    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:35:34.456769    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:35:34.499545    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:35:34.556752    5233 out.go:177]   - env NO_PROXY=192.169.0.6
	I1213 11:35:34.577782    5233 out.go:177]   - env NO_PROXY=192.169.0.6,192.169.0.7
	I1213 11:35:34.598561    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:34.598902    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:35:34.602518    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:35:34.612856    5233 mustload.go:65] Loading cluster: ha-224000
	I1213 11:35:34.613037    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:34.613269    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:35:34.613292    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:35:34.625281    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51891
	I1213 11:35:34.625655    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:35:34.626009    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:35:34.626025    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:35:34.626248    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:35:34.626340    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:35:34.626428    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:35:34.626490    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:35:34.627676    5233 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:35:34.627955    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:35:34.627988    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:35:34.640060    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51893
	I1213 11:35:34.640392    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:35:34.640716    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:35:34.640735    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:35:34.640975    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:35:34.641081    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:35:34.641190    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.8
	I1213 11:35:34.641199    5233 certs.go:194] generating shared ca certs ...
	I1213 11:35:34.641214    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:35:34.641369    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:35:34.641440    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:35:34.641449    5233 certs.go:256] generating profile certs ...
	I1213 11:35:34.641547    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:35:34.641650    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.f4268d28
	I1213 11:35:34.641704    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:35:34.641711    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:35:34.641732    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:35:34.641753    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:35:34.641772    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:35:34.641790    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:35:34.641809    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:35:34.641828    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:35:34.641845    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:35:34.641926    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:35:34.641977    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:35:34.641992    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:35:34.642032    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:35:34.642067    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:35:34.642096    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:35:34.642163    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:35:34.642196    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:34.642223    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:35:34.642243    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:35:34.642269    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:35:34.642361    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:35:34.642463    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:35:34.642554    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:35:34.642635    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:35:34.669703    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 11:35:34.673030    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 11:35:34.682641    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 11:35:34.686133    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 11:35:34.695208    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 11:35:34.698292    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 11:35:34.708147    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 11:35:34.711343    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1213 11:35:34.720522    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 11:35:34.723933    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 11:35:34.733200    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 11:35:34.736904    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1213 11:35:34.748040    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:35:34.768078    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:35:34.787823    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:35:34.807347    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:35:34.827367    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:35:34.847452    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:35:34.866717    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:35:34.886226    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:35:34.905392    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:35:34.924502    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:35:34.944848    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:35:34.964162    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 11:35:34.977883    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 11:35:34.991483    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 11:35:35.005083    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1213 11:35:35.018833    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 11:35:35.033559    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1213 11:35:35.047330    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 11:35:35.060953    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:35:35.065093    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:35:35.074224    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.077601    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.077646    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.081873    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
	I1213 11:35:35.091167    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:35:35.100351    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.103730    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.103786    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.107944    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:35:35.116996    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:35:35.126132    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.129577    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.129642    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.133859    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 11:35:35.143102    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:35:35.146630    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:35:35.150908    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:35:35.155104    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:35:35.159301    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:35:35.163626    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:35:35.167845    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
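	[editor's note] Each `openssl x509 -noout -checkend 86400` run above asks whether the certificate expires within the next 86400 seconds (24h); a non-zero exit would mark the cert for regeneration. A hedged Go equivalent using crypto/x509 follows; the file path is a placeholder, and the exit codes mirror openssl's convention (0 = still valid past the window, 1 = will expire).

	    // checkend.go: rough Go equivalent of `openssl x509 -noout -checkend 86400`.
	    // The path is a placeholder, not a file from the log.
	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	        "time"
	    )

	    func main() {
	        raw, err := os.ReadFile("apiserver-kubelet-client.crt") // placeholder path
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(raw)
	        if block == nil {
	            log.Fatal("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        deadline := time.Now().Add(86400 * time.Second)
	        if deadline.After(cert.NotAfter) {
	            fmt.Println("certificate will expire within 86400s")
	            os.Exit(1)
	        }
	        fmt.Println("certificate is valid beyond the check window")
	    }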
	I1213 11:35:35.172217    5233 kubeadm.go:934] updating node {m03 192.169.0.8 8443 v1.31.2 docker true true} ...
	I1213 11:35:35.172277    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:35:35.172296    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:35:35.172356    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:35:35.190873    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:35:35.190925    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1213 11:35:35.191004    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:35:35.201615    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:35:35.201692    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 11:35:35.209907    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1213 11:35:35.223540    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:35:35.237211    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:35:35.251084    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:35:35.254255    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:35:35.264617    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:35.363941    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:35:35.379515    5233 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:35:35.379713    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:35.453014    5233 out.go:177] * Verifying Kubernetes components...
	I1213 11:35:35.489942    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:35.641418    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:35:35.655240    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:35:35.655455    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 11:35:35.655497    5233 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.6:8443
	I1213 11:35:35.655667    5233 node_ready.go:35] waiting up to 6m0s for node "ha-224000-m03" to be "Ready" ...
	I1213 11:35:35.655710    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:35.655716    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:35.655722    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:35.655726    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:35.658541    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.157140    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:36.157157    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.157163    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.157167    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.159862    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.160261    5233 node_ready.go:49] node "ha-224000-m03" has status "Ready":"True"
	I1213 11:35:36.160270    5233 node_ready.go:38] duration metric: took 504.598087ms for node "ha-224000-m03" to be "Ready" ...
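	[editor's note] The round_trippers traces that follow are the readiness poll: minikube GETs the node and pod objects roughly every 500ms until the Ready condition flips True, within the 6m0s budget noted above. A hedged sketch of an equivalent loop with client-go and the apimachinery wait helpers; the kubeconfig path, interval, and timeout are assumptions, and PollUntilContextTimeout requires apimachinery v0.27+.

	    // nodeready.go: a hedged client-go sketch of the readiness poll traced above.
	    // Kubeconfig path, interval, and timeout are assumptions for illustration.
	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	        if err != nil {
	            log.Fatal(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            log.Fatal(err)
	        }
	        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-224000-m03", metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // treat errors as transient and keep polling
	                }
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady {
	                        return c.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	        if err != nil {
	            log.Fatal(err)
	        }
	        fmt.Println("node is Ready")
	    }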
	I1213 11:35:36.160277    5233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 11:35:36.160322    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:35:36.160332    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.160339    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.160345    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.164741    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:35:36.170442    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:36.170504    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:36.170510    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.170516    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.170519    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.172921    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.173369    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:36.173377    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.173383    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.173390    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.175266    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:36.671483    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:36.671501    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.671508    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.671513    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.674268    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.675049    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:36.675058    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.675065    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.675069    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.678278    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:37.170684    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:37.170697    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.170703    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.170706    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.173103    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:37.173639    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:37.173649    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.173659    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.173663    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.175563    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:37.670841    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:37.670859    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.670867    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.670870    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.673709    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:37.674599    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:37.674609    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.674616    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.674619    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.677468    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.171983    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:38.172002    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.172010    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.172014    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.174562    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.175168    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:38.175176    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.175183    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.175186    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.177058    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:38.177428    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:38.671814    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:38.671831    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.671839    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.671843    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.674211    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.674978    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:38.674987    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.674994    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.675005    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.677077    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.171353    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:39.171371    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.171379    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.171383    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.173885    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.174765    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:39.174780    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.174787    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.174791    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.176969    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.672084    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:39.672101    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.672107    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.672111    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.674182    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.674701    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:39.674709    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.674715    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.674719    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.676491    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.170778    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:40.170793    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.170801    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.170805    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.172716    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.173201    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:40.173209    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.173215    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.173218    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.174782    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.670537    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:40.670554    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.670561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.670564    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.672905    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:40.673371    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:40.673378    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.673384    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.673388    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.675334    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.675698    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
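[Editor's note] Each entry above follows klog's standard prefix, "Lmmdd hh:mm:ss.uuuuuu PID file:line] message", so the ~500ms cadence between successive coredns GETs can be read directly off the timestamps. A minimal Go sketch of a parser for that prefix (package and helper names here are ours, for illustration only; the year is not in the prefix and must be supplied by the caller):

// Minimal sketch (assumed names): parse the klog prefix used throughout
// this log so poll cadence can be measured from the timestamps.
package klogparse

import (
	"fmt"
	"regexp"
	"time"
)

// header matches: severity, month, day, time, PID, file:line, message.
var header = regexp.MustCompile(`^\s*([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func parseLine(line string, year int) (sev string, ts time.Time, pid, loc, msg string, err error) {
	m := header.FindStringSubmatch(line)
	if m == nil {
		return "", time.Time{}, "", "", "", fmt.Errorf("not a klog line: %q", line)
	}
	stamp := fmt.Sprintf("%d-%s-%s %s", year, m[2], m[3], m[4])
	ts, err = time.Parse("2006-01-02 15:04:05.000000", stamp)
	return m[1], ts, m[5], m[6], m[7], err
}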
	I1213 11:35:41.170540    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:41.170555    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.170561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.170565    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.172610    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:41.173071    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:41.173079    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.173086    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.173090    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.174669    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:41.670954    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:41.670970    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.670977    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.670980    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.672906    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:41.673327    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:41.673335    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.673341    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.673346    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.674840    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:42.171591    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:42.171607    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.171614    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.171626    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.173848    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:42.174323    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:42.174331    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.174336    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.174339    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.176072    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:42.670670    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:42.670685    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.670691    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.670695    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.672916    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:42.673334    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:42.673342    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.673348    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.673352    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.674953    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:43.171018    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:43.171035    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.171041    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.171044    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.173500    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:43.173933    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:43.173942    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.173948    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.173952    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.175797    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:43.176282    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:43.671883    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:43.671900    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.671909    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.671914    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.674489    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:43.674937    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:43.674945    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.674952    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.674959    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.676652    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:44.171731    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:44.171747    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.171754    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.171757    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.174220    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:44.174839    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:44.174847    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.174853    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.174858    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.176592    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:44.671463    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:44.671523    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.671535    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.671543    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.674700    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:44.675156    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:44.675163    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.675169    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.675172    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.676845    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:45.170845    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:45.170871    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.170883    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.170890    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.174136    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:45.174847    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:45.174855    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.174861    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.174865    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.177051    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:45.177329    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:45.671539    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:45.671565    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.671577    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.671584    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.674504    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:45.674930    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:45.674937    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.674944    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.674948    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.676902    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:46.171017    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:46.171043    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.171055    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.171064    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.174349    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:46.175105    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:46.175113    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.175119    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.175123    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.176671    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:46.670718    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:46.670742    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.670753    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.670760    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.673727    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:46.674143    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:46.674150    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.674155    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.674159    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.675697    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:47.171141    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:47.171167    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.171181    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.171188    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.174674    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:47.175237    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:47.175248    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.175256    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.175283    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.177291    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:47.177630    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:47.670502    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:47.670539    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.670550    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.670555    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.673105    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:47.673592    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:47.673603    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.673624    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.673631    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.675150    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:48.170714    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:48.170743    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.170753    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.170759    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.174068    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:48.174871    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:48.174879    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.174885    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.174888    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.176423    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:48.671508    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:48.671547    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.671558    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.671563    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.673769    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:48.674261    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:48.674268    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.674274    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.674276    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.676263    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:49.170991    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:49.171006    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.171015    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.171020    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.173356    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.173868    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:49.173876    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.173882    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.173893    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.175974    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.671308    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:49.671349    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.671359    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.671375    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.674049    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.674657    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:49.674666    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.674672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.674676    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.676408    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:49.676866    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:50.170526    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:50.170546    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.170555    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.170560    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.172951    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:50.173418    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:50.173454    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.173462    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.173467    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.175187    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:50.671268    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:50.671306    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.671315    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.671319    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.673518    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:50.674124    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:50.674132    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.674139    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.674142    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.675972    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.172292    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:51.172318    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.172329    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.172335    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.175388    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:51.176242    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:51.176250    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.176255    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.176271    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.178034    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.672241    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:51.672259    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.672268    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.672273    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.674716    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:51.675171    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:51.675178    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.675184    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.675187    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.677031    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.677333    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:52.171324    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:52.171350    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.171394    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.171403    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.174624    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:52.175339    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:52.175347    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.175353    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.175356    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.176912    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:52.672143    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:52.672156    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.672163    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.672166    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.674142    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:52.674648    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:52.674656    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.674662    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.674665    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.676343    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.171789    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:53.171834    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.171845    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.171850    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.173997    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:53.174633    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:53.174641    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.174647    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.174652    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.176489    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.671631    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:53.671689    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.671702    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.671708    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.674629    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:53.675317    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:53.675324    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.675330    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.675335    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.677039    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.677545    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:54.172269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:54.172296    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.172309    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.172316    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.175190    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:54.175863    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:54.175871    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.175880    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.175884    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.177695    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:54.671631    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:54.671656    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.671679    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.671687    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.674858    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:54.675633    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:54.675644    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.675652    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.675659    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.677622    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.172159    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:55.172183    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.172195    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.172200    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.175352    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:55.175951    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.175961    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.175969    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.175974    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.177826    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.672525    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:55.672548    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.672561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.672568    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.676200    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:55.676655    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.676663    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.676669    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.676672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.679603    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.680007    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.680026    5233 pod_ready.go:82] duration metric: took 19.509731372s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
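[Editor's note] The loop that just completed is minikube's pod_ready wait: each cycle GETs the pod and then its node, every ~500ms, until the pod's Ready condition flips to True (19.5s here for coredns-7c65d6cfc9-5ds6r). A minimal client-go sketch of the same pattern, assuming a pre-built clientset; waitPodReady and the interval are our illustration, not minikube's exact code:

// Minimal sketch, assuming a pre-built client-go clientset: poll a pod
// until its Ready condition is True, mirroring the GET cycle above.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // surface API errors instead of retrying blindly
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not posted yet; keep polling
		})
}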
	I1213 11:35:55.680040    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.680088    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sswfx
	I1213 11:35:55.680094    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.680100    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.680104    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.682544    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.683008    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.683017    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.683023    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.683027    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.684867    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.685203    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.685212    5233 pod_ready.go:82] duration metric: took 5.165234ms for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.685222    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.685259    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000
	I1213 11:35:55.685264    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.685270    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.685274    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.687013    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.687444    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.687452    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.687458    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.687463    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.689192    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.689502    5233 pod_ready.go:93] pod "etcd-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.689510    5233 pod_ready.go:82] duration metric: took 4.282723ms for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.689517    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.689546    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m02
	I1213 11:35:55.689551    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.689557    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.689561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.691520    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.691918    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:55.691926    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.691932    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.691935    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.693585    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.694009    5233 pod_ready.go:93] pod "etcd-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.694017    5233 pod_ready.go:82] duration metric: took 4.494586ms for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.694023    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.694061    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m03
	I1213 11:35:55.694066    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.694071    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.694074    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.696047    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.696583    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:55.696591    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.696597    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.696602    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.698695    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.699182    5233 pod_ready.go:93] pod "etcd-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.699191    5233 pod_ready.go:82] duration metric: took 5.162024ms for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.699204    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.873308    5233 request.go:632] Waited for 174.059147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:35:55.873398    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:35:55.873409    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.873420    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.873432    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.877057    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:56.073941    5233 request.go:632] Waited for 196.465756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:56.073990    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:56.073998    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.074007    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.074015    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.076268    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.076663    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.076673    5233 pod_ready.go:82] duration metric: took 377.466982ms for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
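[Editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter, not from the API server: once the paired pod/node GETs outpace the configured QPS, each request sleeps before it is sent. A sketch of where that knob lives, assuming a kubeconfig-based client; the 50/100 values are illustrative only (client-go defaults to QPS=5, Burst=10 when these are unset):

// Sketch of the client-side rate-limiter knob behind the throttling
// messages, assuming a kubeconfig-based client.
package throttle

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // steady-state requests per second before the limiter sleeps
	cfg.Burst = 100 // short bursts allowed above QPS
	return kubernetes.NewForConfig(cfg)
}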
	I1213 11:35:56.076681    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.272907    5233 request.go:632] Waited for 196.189621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:35:56.272950    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:35:56.272958    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.272967    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.272973    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.275118    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.473781    5233 request.go:632] Waited for 198.215756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:56.473814    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:56.473818    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.473825    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.473834    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.476052    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.476328    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.476337    5233 pod_ready.go:82] duration metric: took 399.655338ms for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.476344    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.672963    5233 request.go:632] Waited for 196.573548ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:35:56.673025    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:35:56.673042    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.673069    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.673082    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.676053    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.874041    5233 request.go:632] Waited for 197.242072ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:56.874093    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:56.874101    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.874112    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.874148    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.877393    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:56.877917    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.877925    5233 pod_ready.go:82] duration metric: took 401.579167ms for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.877932    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.072677    5233 request.go:632] Waited for 194.687466ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:35:57.072807    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:35:57.072818    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.072829    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.072837    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.076583    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:57.273280    5233 request.go:632] Waited for 195.960523ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:57.273356    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:57.273364    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.273372    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.273377    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.275590    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:57.275864    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:57.275873    5233 pod_ready.go:82] duration metric: took 397.938639ms for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.275887    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.473240    5233 request.go:632] Waited for 197.314418ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:35:57.473276    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:35:57.473282    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.473288    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.473293    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.479318    5233 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1213 11:35:57.672800    5233 request.go:632] Waited for 192.751323ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:57.672854    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:57.672865    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.672879    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.672883    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.674679    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:57.674953    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:57.674964    5233 pod_ready.go:82] duration metric: took 399.075588ms for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.674971    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.872629    5233 request.go:632] Waited for 197.615913ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:35:57.872684    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:35:57.872690    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.872698    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.872704    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.875523    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.072684    5233 request.go:632] Waited for 196.666527ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:58.072801    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:58.072814    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.072825    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.072835    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.076186    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.076572    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:58.076584    5233 pod_ready.go:82] duration metric: took 401.611001ms for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.076594    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.272566    5233 request.go:632] Waited for 195.927789ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:35:58.272623    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:35:58.272631    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.272639    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.272646    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.275090    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.473816    5233 request.go:632] Waited for 198.141217ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:35:58.473894    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:35:58.473905    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.473916    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.473922    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.476808    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.477275    5233 pod_ready.go:98] node "ha-224000-m04" hosting pod "kube-proxy-7b8ch" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-224000-m04" has status "Ready":"Unknown"
	I1213 11:35:58.477286    5233 pod_ready.go:82] duration metric: took 400.69023ms for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	E1213 11:35:58.477294    5233 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-224000-m04" hosting pod "kube-proxy-7b8ch" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-224000-m04" has status "Ready":"Unknown"
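[Editor's note] kube-proxy-7b8ch is skipped rather than waited on because its node, ha-224000-m04, reports Ready "Unknown", which typically means the kubelet has stopped posting heartbeats (for example, the worker VM is stopped or unreachable). A minimal sketch of the node-condition check implied by that "skipping!" branch, assuming the same clientset as the earlier sketches; nodeReady is our name for illustration:

// Minimal sketch of the node-condition check implied by the branch above.
package nodecheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the node's Ready condition is True. A status of
// "Unknown", as logged for ha-224000-m04, means no recent kubelet heartbeat.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}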
	I1213 11:35:58.477302    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.672582    5233 request.go:632] Waited for 195.231932ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:35:58.672629    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:35:58.672638    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.672649    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.672657    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.676219    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.873974    5233 request.go:632] Waited for 197.337714ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:58.874026    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:58.874034    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.874045    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.874051    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.877592    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.877988    5233 pod_ready.go:93] pod "kube-proxy-9wj7k" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:58.878000    5233 pod_ready.go:82] duration metric: took 400.696273ms for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.878009    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.073381    5233 request.go:632] Waited for 195.314343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:35:59.073433    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:35:59.073441    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.073449    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.073455    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.075792    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:59.273216    5233 request.go:632] Waited for 196.949491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:59.273267    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:59.273283    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.273292    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.273298    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.275702    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:59.276247    5233 pod_ready.go:93] pod "kube-proxy-9wsr4" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:59.276258    5233 pod_ready.go:82] duration metric: took 398.245999ms for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.276265    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.473693    5233 request.go:632] Waited for 197.370074ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:35:59.473831    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:35:59.473842    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.473854    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.473862    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.477420    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:59.672646    5233 request.go:632] Waited for 194.659895ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:59.672759    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:59.672771    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.672784    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.672794    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.676016    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:59.676434    5233 pod_ready.go:93] pod "kube-proxy-gmw9z" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:59.676444    5233 pod_ready.go:82] duration metric: took 400.177932ms for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.676451    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.873284    5233 request.go:632] Waited for 196.790328ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:35:59.873409    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:35:59.873424    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.873437    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.873446    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.876647    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.072905    5233 request.go:632] Waited for 195.872865ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:36:00.073011    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:36:00.073019    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.073028    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.073032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.076068    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.076488    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.076498    5233 pod_ready.go:82] duration metric: took 400.046456ms for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.076506    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.273249    5233 request.go:632] Waited for 196.676645ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:36:00.273361    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:36:00.273380    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.273405    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.273414    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.276870    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.473222    5233 request.go:632] Waited for 195.664041ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:36:00.473283    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:36:00.473291    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.473300    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.473304    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.475794    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:36:00.476078    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.476087    5233 pod_ready.go:82] duration metric: took 399.579687ms for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.476096    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.674009    5233 request.go:632] Waited for 197.794547ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:36:00.674081    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:36:00.674092    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.674106    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.674121    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.677780    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.873417    5233 request.go:632] Waited for 194.907567ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:36:00.873476    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:36:00.873488    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.873500    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.873508    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.876715    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.877199    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.877213    5233 pod_ready.go:82] duration metric: took 401.11429ms for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.877234    5233 pod_ready.go:39] duration metric: took 24.717168247s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
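The readiness loop above alternates pod and node GETs spaced roughly 200ms apart; that spacing comes from client-go's client-side throttle (the "Waited for ... due to client-side throttling" lines), i.e. the default rest.Config rate of 5 QPS. A minimal Go sketch of the same wait pattern, assuming a configured client-go clientset; waitPodReady is a hypothetical helper, not minikube's pod_ready.go:

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or the
    // timeout elapses, mirroring the "waiting up to 6m0s for pod ..." lines.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }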
	I1213 11:36:00.877249    5233 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:36:00.877335    5233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:00.889500    5233 api_server.go:72] duration metric: took 25.510179125s to wait for apiserver process to appear ...
	I1213 11:36:00.889514    5233 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:36:00.889525    5233 api_server.go:253] Checking apiserver healthz at https://192.169.0.6:8443/healthz ...
	I1213 11:36:00.892661    5233 api_server.go:279] https://192.169.0.6:8443/healthz returned 200:
	ok
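The healthz probe is just an HTTPS GET whose 200 response with the literal body "ok" marks the apiserver healthy. A sketch of that check, assuming an *http.Client already carrying the cluster's TLS credentials:

    // apiServerHealthy reports whether GET <endpoint>/healthz returns
    // HTTP 200 with body "ok", as in the log lines above.
    // Imports: io, net/http.
    func apiServerHealthy(client *http.Client, endpoint string) (bool, error) {
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return false, err
    	}
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }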
	I1213 11:36:00.892694    5233 round_trippers.go:463] GET https://192.169.0.6:8443/version
	I1213 11:36:00.892700    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.892706    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.892710    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.893221    5233 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1213 11:36:00.893255    5233 api_server.go:141] control plane version: v1.31.2
	I1213 11:36:00.893263    5233 api_server.go:131] duration metric: took 3.744726ms to wait for apiserver health ...
	I1213 11:36:00.893268    5233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:36:01.073160    5233 request.go:632] Waited for 179.837088ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.073311    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.073322    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.073333    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.073340    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.081092    5233 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1213 11:36:01.086508    5233 system_pods.go:59] 26 kube-system pods found
	I1213 11:36:01.086526    5233 system_pods.go:61] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:36:01.086530    5233 system_pods.go:61] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:36:01.086533    5233 system_pods.go:61] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:36:01.086543    5233 system_pods.go:61] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:36:01.086547    5233 system_pods.go:61] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:36:01.086550    5233 system_pods.go:61] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:36:01.086553    5233 system_pods.go:61] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:36:01.086555    5233 system_pods.go:61] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:36:01.086559    5233 system_pods.go:61] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:36:01.086565    5233 system_pods.go:61] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:36:01.086569    5233 system_pods.go:61] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:36:01.086572    5233 system_pods.go:61] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:36:01.086575    5233 system_pods.go:61] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:36:01.086579    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:36:01.086582    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:36:01.086585    5233 system_pods.go:61] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:36:01.086588    5233 system_pods.go:61] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:36:01.086591    5233 system_pods.go:61] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:36:01.086593    5233 system_pods.go:61] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:36:01.086596    5233 system_pods.go:61] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:36:01.086600    5233 system_pods.go:61] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:36:01.086602    5233 system_pods.go:61] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:36:01.086606    5233 system_pods.go:61] "kube-vip-ha-224000" [6ca3e782-dd8d-4dd1-a888-c9a3c0b605a3] Running
	I1213 11:36:01.086609    5233 system_pods.go:61] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:36:01.086612    5233 system_pods.go:61] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:36:01.086616    5233 system_pods.go:61] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:36:01.086622    5233 system_pods.go:74] duration metric: took 193.351906ms to wait for pod list to return data ...
	I1213 11:36:01.086629    5233 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:36:01.272667    5233 request.go:632] Waited for 185.987795ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:36:01.272763    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:36:01.272774    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.272785    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.272793    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.276315    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:01.276400    5233 default_sa.go:45] found service account: "default"
	I1213 11:36:01.276412    5233 default_sa.go:55] duration metric: took 189.780655ms for default service account to be created ...
	I1213 11:36:01.276419    5233 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:36:01.473526    5233 request.go:632] Waited for 197.034094ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.473601    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.473653    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.473672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.473680    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.479025    5233 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1213 11:36:01.484476    5233 system_pods.go:86] 26 kube-system pods found
	I1213 11:36:01.484489    5233 system_pods.go:89] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:36:01.484495    5233 system_pods.go:89] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:36:01.484499    5233 system_pods.go:89] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:36:01.484502    5233 system_pods.go:89] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:36:01.484506    5233 system_pods.go:89] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:36:01.484508    5233 system_pods.go:89] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:36:01.484511    5233 system_pods.go:89] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:36:01.484516    5233 system_pods.go:89] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:36:01.484518    5233 system_pods.go:89] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:36:01.484522    5233 system_pods.go:89] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:36:01.484524    5233 system_pods.go:89] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:36:01.484527    5233 system_pods.go:89] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:36:01.484531    5233 system_pods.go:89] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:36:01.484534    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:36:01.484538    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:36:01.484540    5233 system_pods.go:89] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:36:01.484543    5233 system_pods.go:89] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:36:01.484546    5233 system_pods.go:89] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:36:01.484549    5233 system_pods.go:89] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:36:01.484552    5233 system_pods.go:89] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:36:01.484555    5233 system_pods.go:89] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:36:01.484558    5233 system_pods.go:89] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:36:01.484561    5233 system_pods.go:89] "kube-vip-ha-224000" [6ca3e782-dd8d-4dd1-a888-c9a3c0b605a3] Running
	I1213 11:36:01.484563    5233 system_pods.go:89] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:36:01.484567    5233 system_pods.go:89] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:36:01.484571    5233 system_pods.go:89] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:36:01.484576    5233 system_pods.go:126] duration metric: took 208.153776ms to wait for k8s-apps to be running ...
	I1213 11:36:01.484587    5233 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:36:01.484655    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:36:01.495689    5233 system_svc.go:56] duration metric: took 11.101939ms WaitForService to wait for kubelet
	I1213 11:36:01.495712    5233 kubeadm.go:582] duration metric: took 26.116392116s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:36:01.495725    5233 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:36:01.673624    5233 request.go:632] Waited for 177.853394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes
	I1213 11:36:01.673726    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes
	I1213 11:36:01.673737    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.673747    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.673785    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.677584    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:01.678344    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678354    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678360    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678364    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678367    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678369    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678372    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678375    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678378    5233 node_conditions.go:105] duration metric: took 182.650917ms to run NodePressure ...
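The four storage/cpu pairs above come from a single GET /api/v1/nodes across the cluster's four nodes. The same readout with client-go (reusing the clientset and imports from the earlier sketch, plus fmt; ResourceEphemeralStorage and ResourceCPU are the standard capacity keys):

    // printNodeCapacity lists all nodes and prints the two capacity
    // figures the NodePressure check logs per node.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
    		fmt.Printf("node cpu capacity is %s\n", cpu.String())
    	}
    	return nil
    }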
	I1213 11:36:01.678389    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:36:01.678404    5233 start.go:255] writing updated cluster config ...
	I1213 11:36:01.701519    5233 out.go:201] 
	I1213 11:36:01.755040    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:36:01.755118    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.792739    5233 out.go:177] * Starting "ha-224000-m04" worker node in "ha-224000" cluster
	I1213 11:36:01.850695    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:36:01.850719    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:36:01.850830    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:36:01.850840    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:36:01.850919    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.851367    5233 start.go:360] acquireMachinesLock for ha-224000-m04: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:36:01.851417    5233 start.go:364] duration metric: took 38.664µs to acquireMachinesLock for "ha-224000-m04"
	I1213 11:36:01.851430    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:36:01.851435    5233 fix.go:54] fixHost starting: m04
	I1213 11:36:01.851670    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:36:01.851689    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:36:01.863548    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51897
	I1213 11:36:01.863864    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:36:01.864237    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:36:01.864251    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:36:01.864489    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:36:01.864595    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:01.864718    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetState
	I1213 11:36:01.864801    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.864873    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid from json: 4360
	I1213 11:36:01.866047    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid 4360 missing from process table
	I1213 11:36:01.866070    5233 fix.go:112] recreateIfNeeded on ha-224000-m04: state=Stopped err=<nil>
	I1213 11:36:01.866083    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	W1213 11:36:01.866170    5233 fix.go:138] unexpected machine state, will restart: <nil>
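Here the driver reads the previous hyperkit pid (4360) from its saved state, finds it gone from the process table, and concludes state=Stopped. The conventional Unix liveness probe behind that kind of check is kill(pid, 0); a sketch of the pattern, not the driver's actual code:

    // pidAlive probes a pid with signal 0, which checks existence and
    // permissions without delivering a signal. syscall.ESRCH is the
    // "missing from process table" case above; EPERM means the process
    // exists but is owned by another user.
    func pidAlive(pid int) bool {
    	err := syscall.Kill(pid, 0) // import "syscall"
    	return err == nil || err == syscall.EPERM
    }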
	I1213 11:36:01.886701    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m04" ...
	I1213 11:36:01.927945    5233 main.go:141] libmachine: (ha-224000-m04) Calling .Start
	I1213 11:36:01.928215    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.928249    5233 main.go:141] libmachine: (ha-224000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid
	I1213 11:36:01.928315    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Using UUID 3aa2edb2-289d-46e2-9534-1f9a2dff1012
	I1213 11:36:01.954122    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Generated MAC e2:d2:09:69:a8:b4
	I1213 11:36:01.954144    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:36:01.954348    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3aa2edb2-289d-46e2-9534-1f9a2dff1012", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f0e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:36:01.954378    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3aa2edb2-289d-46e2-9534-1f9a2dff1012", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f0e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:36:01.954426    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3aa2edb2-289d-46e2-9534-1f9a2dff1012", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/ha-224000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:36:01.954465    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3aa2edb2-289d-46e2-9534-1f9a2dff1012 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/ha-224000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:36:01.954478    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:36:01.956069    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Pid is 5375
	I1213 11:36:01.956512    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Attempt 0
	I1213 11:36:01.956527    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.956630    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid from json: 5375
	I1213 11:36:01.959334    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Searching for e2:d2:09:69:a8:b4 in /var/db/dhcpd_leases ...
	I1213 11:36:01.959473    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:36:01.959490    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c9a76}
	I1213 11:36:01.959506    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9a30}
	I1213 11:36:01.959522    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:36:01.959533    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:36:01.959548    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Found match: e2:d2:09:69:a8:b4
	I1213 11:36:01.959568    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetConfigRaw
	I1213 11:36:01.959573    5233 main.go:141] libmachine: (ha-224000-m04) DBG | IP: 192.169.0.9
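The IP lookup above scans macOS's vmnet lease database for the MAC the driver generated; note the lease file drops leading zeros in octets (e2:d2:9:... vs the generated e2:d2:09:...). A sketch of that scan, assuming the usual /var/db/dhcpd_leases layout where ip_address= precedes hw_address= within a lease block; findIPByMAC is hypothetical, not the driver's code:

    // findIPByMAC returns the ip_address from the lease block whose
    // hw_address contains mac. Callers should normalize the MAC the
    // same way the lease file does (no leading zeros per octet).
    // Imports: fmt, os, strings.
    func findIPByMAC(leasePath, mac string) (string, error) {
    	data, err := os.ReadFile(leasePath)
    	if err != nil {
    		return "", err
    	}
    	var ip string
    	for _, line := range strings.Split(string(data), "\n") {
    		line = strings.TrimSpace(line)
    		if strings.HasPrefix(line, "ip_address=") {
    			ip = strings.TrimPrefix(line, "ip_address=")
    		}
    		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
    			return ip, nil
    		}
    	}
    	return "", fmt.Errorf("MAC %s not found in %s", mac, leasePath)
    }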
	I1213 11:36:01.960365    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:01.960553    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.960997    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:36:01.961019    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:01.961190    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:01.961347    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:01.961451    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:01.961542    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:01.961646    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:01.961799    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:01.961972    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:01.961979    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:36:01.968096    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:36:01.976979    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:36:01.978042    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:36:01.978064    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:36:01.978076    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:36:01.978087    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:36:02.370264    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:36:02.370282    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:36:02.485027    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:36:02.485059    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:36:02.485069    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:36:02.485077    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:36:02.485882    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:36:02.485893    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:36:08.339296    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:36:08.339331    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:36:08.339343    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:36:08.362659    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:36:37.019941    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:36:37.019956    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.020079    5233 buildroot.go:166] provisioning hostname "ha-224000-m04"
	I1213 11:36:37.020091    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.020181    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.020268    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.020362    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.020446    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.020550    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.020691    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.020850    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.020859    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m04 && echo "ha-224000-m04" | sudo tee /etc/hostname
	I1213 11:36:37.079455    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m04
	
	I1213 11:36:37.079470    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.079611    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.079712    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.079807    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.079899    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.080050    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.080202    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.080213    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:36:37.138441    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:36:37.138458    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:36:37.138471    5233 buildroot.go:174] setting up certificates
	I1213 11:36:37.138478    5233 provision.go:84] configureAuth start
	I1213 11:36:37.138489    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.138635    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:37.138758    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.138874    5233 provision.go:143] copyHostCerts
	I1213 11:36:37.138906    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:36:37.138980    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:36:37.138987    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:36:37.139126    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:36:37.139340    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:36:37.139389    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:36:37.139394    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:36:37.139490    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:36:37.139651    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:36:37.139700    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:36:37.139705    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:36:37.139785    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:36:37.139956    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m04 san=[127.0.0.1 192.169.0.9 ha-224000-m04 localhost minikube]
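The generated server.pem carries both IP and DNS SANs (san=[127.0.0.1 192.169.0.9 ha-224000-m04 localhost minikube]). The standard-library pattern for issuing such a cert from an existing CA looks roughly like this; it mirrors the idea, not minikube's provision.go:

    // signServerCert issues a server certificate signed by caCert/caKey,
    // splitting the SAN list into IP and DNS entries as logged above.
    // Imports: crypto/rand, crypto/rsa, crypto/x509, crypto/x509/pkix,
    // math/big, net, time.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, s := range sans {
    		if ip := net.ParseIP(s); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, s)
    		}
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil // DER bytes; PEM-encode as "CERTIFICATE" to write server.pem
    }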
	I1213 11:36:37.316710    5233 provision.go:177] copyRemoteCerts
	I1213 11:36:37.316783    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:36:37.316812    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.316958    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.317051    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.317152    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.317246    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:37.347920    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:36:37.347992    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:36:37.367331    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:36:37.367418    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:36:37.387377    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:36:37.387449    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:36:37.407116    5233 provision.go:87] duration metric: took 268.631983ms to configureAuth
	I1213 11:36:37.407131    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:36:37.407332    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:36:37.407364    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:37.407494    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.407580    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.407680    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.407756    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.407841    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.407978    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.408110    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.408119    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:36:37.455460    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:36:37.455475    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:36:37.455568    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:36:37.455579    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.455716    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.455822    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.455928    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.456017    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.456183    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.456322    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.456371    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:36:37.514210    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	Environment=NO_PROXY=192.169.0.6,192.169.0.7
	Environment=NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:36:37.514229    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.514369    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.514460    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.514608    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.514700    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.514873    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.515015    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.515027    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:36:39.106697    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:36:39.106713    5233 machine.go:96] duration metric: took 37.146099544s to provisionDockerMachine
	I1213 11:36:39.106722    5233 start.go:293] postStartSetup for "ha-224000-m04" (driver="hyperkit")
	I1213 11:36:39.106729    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:36:39.106741    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.106958    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:36:39.106972    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.107076    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.107171    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.107250    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.107377    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.137664    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:36:39.140876    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:36:39.140886    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:36:39.140989    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:36:39.141205    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:36:39.141216    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:36:39.141482    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:36:39.148686    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:36:39.168356    5233 start.go:296] duration metric: took 61.625015ms for postStartSetup
	I1213 11:36:39.168377    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.168566    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:36:39.168580    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.168694    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.168784    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.168873    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.168955    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.200288    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:36:39.200368    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:36:39.252642    5233 fix.go:56] duration metric: took 37.401602513s for fixHost
	I1213 11:36:39.252667    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.252828    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.252931    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.253035    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.253138    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.253294    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:39.253427    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:39.253435    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:36:39.303241    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118599.429050956
	
	I1213 11:36:39.303262    5233 fix.go:216] guest clock: 1734118599.429050956
	I1213 11:36:39.303272    5233 fix.go:229] Guest: 2024-12-13 11:36:39.429050956 -0800 PST Remote: 2024-12-13 11:36:39.252657 -0800 PST m=+195.719809020 (delta=176.393956ms)
	I1213 11:36:39.303284    5233 fix.go:200] guest clock delta is within tolerance: 176.393956ms
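fix.go's clock check runs `date +%s.%N` in the guest and accepts the ~176ms host/guest skew as within tolerance. A sketch of that comparison; it assumes %N's zero-padded nine-digit field so the fraction parses directly as nanoseconds:

    // clockDelta parses "1734118599.429050956"-style output and returns
    // the absolute skew against the local clock plus whether it is
    // within tolerance. Imports: strconv, strings, time.
    func clockDelta(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, false, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, false, err
    		}
    	}
    	delta := time.Since(time.Unix(sec, nsec))
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance, nil
    }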
	I1213 11:36:39.303287    5233 start.go:83] releasing machines lock for "ha-224000-m04", held for 37.452264193s
	I1213 11:36:39.303304    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.303439    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:39.324718    5233 out.go:177] * Found network options:
	I1213 11:36:39.345593    5233 out.go:177]   - NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8
	W1213 11:36:39.367406    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.367428    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.367438    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:36:39.367453    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.367872    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.367964    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.368045    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:36:39.368067    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	W1213 11:36:39.368071    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.368083    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.368091    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:36:39.368153    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:36:39.368162    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.368165    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.368280    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.368311    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.368396    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.368417    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.368502    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.368516    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.368581    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	W1213 11:36:39.395349    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:36:39.395429    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:36:39.444914    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:36:39.444929    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:36:39.445000    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:36:39.460519    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:36:39.468747    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:36:39.476970    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:36:39.477028    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:36:39.485250    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:36:39.493728    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:36:39.501920    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:36:39.510067    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:36:39.518621    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:36:39.527064    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:36:39.535503    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:36:39.544105    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:36:39.551996    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:36:39.552057    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:36:39.560903    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:36:39.569057    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:36:39.663026    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:36:39.681615    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:36:39.681707    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:36:39.701692    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:36:39.713515    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:36:39.733157    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:36:39.744420    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:36:39.755241    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:36:39.778169    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:36:39.788619    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:36:39.803742    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:36:39.806753    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:36:39.814222    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:36:39.828173    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:36:39.923220    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:36:40.025879    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:36:40.025908    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:36:40.040057    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:36:40.139577    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:37:41.169349    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.030424073s)
	I1213 11:37:41.169444    5233 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1213 11:37:41.204399    5233 out.go:201] 
	W1213 11:37:41.225442    5233 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Dec 13 19:36:37 ha-224000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.427068027Z" level=info msg="Starting up"
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.427760840Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.428340753Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.446225003Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461418150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461538159Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461607016Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461644040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461775643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461826393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461966604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462007624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462040126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462069720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462182838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462429601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464011795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464067757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464257837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464302280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464410649Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464463860Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465390367Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465443699Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465555213Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465597957Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465634744Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465705067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465941498Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466071120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466113283Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466145023Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466176156Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466211240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466250495Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466285590Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466317193Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466347259Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466376937Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466407325Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466446395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466488362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466530329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466566314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466607503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466641823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466672212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466702609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466732812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466764575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466794248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466823748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466854140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466886668Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466935305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466981167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467011716Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467066705Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467101883Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467131499Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467160087Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467188157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467216598Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467244211Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467402488Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467606858Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467674178Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467711081Z" level=info msg="containerd successfully booted in 0.022287s"
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.455600290Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.476104344Z" level=info msg="Loading containers: start."
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.568941234Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.144331314Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.199597389Z" level=info msg="Loading containers: done."
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210939061Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210976128Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210994749Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.211089971Z" level=info msg="Daemon has completed initialization"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.231136019Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 19:36:39 ha-224000-m04 systemd[1]: Started Docker Application Container Engine.
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.231344731Z" level=info msg="API listen on [::]:2376"
	Dec 13 19:36:40 ha-224000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.277223387Z" level=info msg="Processing signal 'terminated'"
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278137307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278251358Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278340377Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278256739Z" level=info msg="Daemon shutdown complete"
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: docker.service: Deactivated successfully.
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 19:36:41 ha-224000-m04 dockerd[1113]: time="2024-12-13T19:36:41.322763293Z" level=info msg="Starting up"
	Dec 13 19:37:41 ha-224000-m04 dockerd[1113]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1213 11:37:41.225503    5233 out.go:270] * 
	W1213 11:37:41.226123    5233 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:37:41.267588    5233 out.go:201] 

                                                
                                                
** /stderr **
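The decisive line in the journal above is dockerd pid 1113 timing out while dialing /run/containerd/containerd.sock: minikube had just rewritten /etc/containerd/config.toml and restarted containerd, and the subsequent `sudo systemctl restart docker` waited a full minute before docker.service failed. A minimal triage sketch against the affected node, assuming shell access to the guest (the `minikube ssh` invocation is a hypothetical convenience; the unit names and socket path are taken from the log, while the systemd commands are generic steps that were not part of the test run):

    out/minikube-darwin-amd64 ssh -p ha-224000 -n m04        # open a shell on the m04 guest

    # inside the guest: is the system containerd unit up, and does its socket exist?
    sudo systemctl status containerd --no-pager
    ls -l /run/containerd/containerd.sock

    # recent containerd logs usually explain why the socket never came up
    sudo journalctl -u containerd --no-pager -n 50

    # restart in dependency order before retrying docker
    sudo systemctl restart containerd && sudo systemctl restart docker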
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-224000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:474: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-224000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-224000 -n ha-224000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-224000 logs -n 25: (3.410639604s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-224000 cp ha-224000-m03:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m02:/home/docker/cp-test_ha-224000-m03_ha-224000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m02 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m03_ha-224000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m03:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04:/home/docker/cp-test_ha-224000-m03_ha-224000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m04 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m03_ha-224000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-224000 cp testdata/cp-test.txt                                                                                            | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1762227409/001/cp-test_ha-224000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000:/home/docker/cp-test_ha-224000-m04_ha-224000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000 sudo cat                                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m04_ha-224000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m02:/home/docker/cp-test_ha-224000-m04_ha-224000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m02 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m04_ha-224000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m03:/home/docker/cp-test_ha-224000-m04_ha-224000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m03 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m04_ha-224000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-224000 node stop m02 -v=7                                                                                                 | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-224000 node start m02 -v=7                                                                                                | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-224000 -v=7                                                                                                       | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-224000 -v=7                                                                                                            | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:33 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-224000 --wait=true -v=7                                                                                                | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:33 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-224000                                                                                                            | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:37 PST |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 11:33:23
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:33:23.556546    5233 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:33:23.556761    5233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:33:23.556766    5233 out.go:358] Setting ErrFile to fd 2...
	I1213 11:33:23.556770    5233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:33:23.556939    5233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:33:23.558493    5233 out.go:352] Setting JSON to false
	I1213 11:33:23.588845    5233 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1973,"bootTime":1734116430,"procs":551,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 11:33:23.588936    5233 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 11:33:23.610818    5233 out.go:177] * [ha-224000] minikube v1.34.0 on Darwin 15.1.1
	I1213 11:33:23.652607    5233 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 11:33:23.652667    5233 notify.go:220] Checking for updates...
	I1213 11:33:23.695155    5233 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:23.716580    5233 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 11:33:23.758076    5233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:33:23.778447    5233 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 11:33:23.799542    5233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:33:23.821105    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:23.821299    5233 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 11:33:23.821877    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:23.821927    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:23.834367    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51814
	I1213 11:33:23.834740    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:23.835143    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:23.835152    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:23.835371    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:23.835545    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:23.867473    5233 out.go:177] * Using the hyperkit driver based on existing profile
	I1213 11:33:23.909252    5233 start.go:297] selected driver: hyperkit
	I1213 11:33:23.909282    5233 start.go:901] validating driver "hyperkit" against &{Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:23.909534    5233 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:33:23.909725    5233 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:33:23.909981    5233 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20090-800/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1213 11:33:23.922579    5233 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1213 11:33:23.929434    5233 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:23.929452    5233 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1213 11:33:23.935885    5233 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:33:23.935924    5233 cni.go:84] Creating CNI manager for ""
	I1213 11:33:23.935972    5233 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1213 11:33:23.936044    5233 start.go:340] cluster config:
	{Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:23.936181    5233 iso.go:125] acquiring lock: {Name:mke3ec926417a11c6d5b1356d2702df4068fa1cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:33:23.978382    5233 out.go:177] * Starting "ha-224000" primary control-plane node in "ha-224000" cluster
	I1213 11:33:23.999338    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:23.999406    5233 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1213 11:33:23.999429    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:33:23.999602    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:33:23.999621    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:33:23.999813    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:24.000837    5233 start.go:360] acquireMachinesLock for ha-224000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:33:24.000950    5233 start.go:364] duration metric: took 87.843µs to acquireMachinesLock for "ha-224000"
	I1213 11:33:24.000984    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:33:24.001006    5233 fix.go:54] fixHost starting: 
	I1213 11:33:24.001462    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:24.001491    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:24.013395    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51816
	I1213 11:33:24.013731    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:24.014113    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:24.014132    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:24.014335    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:24.014453    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:24.014563    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:33:24.014649    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.014739    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 4112
	I1213 11:33:24.015879    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid 4112 missing from process table
	I1213 11:33:24.015946    5233 fix.go:112] recreateIfNeeded on ha-224000: state=Stopped err=<nil>
	I1213 11:33:24.015971    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	W1213 11:33:24.016061    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:33:24.037410    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000" ...
	I1213 11:33:24.058353    5233 main.go:141] libmachine: (ha-224000) Calling .Start
	I1213 11:33:24.058516    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.058530    5233 main.go:141] libmachine: (ha-224000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid
	I1213 11:33:24.059997    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid 4112 missing from process table
	I1213 11:33:24.060006    5233 main.go:141] libmachine: (ha-224000) DBG | pid 4112 is in state "Stopped"
	I1213 11:33:24.060020    5233 main.go:141] libmachine: (ha-224000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid...
	I1213 11:33:24.060148    5233 main.go:141] libmachine: (ha-224000) DBG | Using UUID b2cf51fb-709d-45fe-a947-282a845e5503
	I1213 11:33:24.195839    5233 main.go:141] libmachine: (ha-224000) DBG | Generated MAC e2:1f:26:f2:db:4d
	I1213 11:33:24.195876    5233 main.go:141] libmachine: (ha-224000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:33:24.196013    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b2cf51fb-709d-45fe-a947-282a845e5503", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:24.196037    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b2cf51fb-709d-45fe-a947-282a845e5503", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:24.196083    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b2cf51fb-709d-45fe-a947-282a845e5503", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/ha-224000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:33:24.196130    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b2cf51fb-709d-45fe-a947-282a845e5503 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/ha-224000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:33:24.196149    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:33:24.198377    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Pid is 5248
	I1213 11:33:24.198751    5233 main.go:141] libmachine: (ha-224000) DBG | Attempt 0
	I1213 11:33:24.198766    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.198839    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:33:24.200071    5233 main.go:141] libmachine: (ha-224000) DBG | Searching for e2:1f:26:f2:db:4d in /var/db/dhcpd_leases ...
	I1213 11:33:24.200197    5233 main.go:141] libmachine: (ha-224000) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:33:24.200237    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:33:24.200259    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c99d7}
	I1213 11:33:24.200275    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c98c5}
	I1213 11:33:24.200287    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9849}
	I1213 11:33:24.200302    5233 main.go:141] libmachine: (ha-224000) DBG | Found match: e2:1f:26:f2:db:4d
	I1213 11:33:24.200309    5233 main.go:141] libmachine: (ha-224000) DBG | IP: 192.169.0.6
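
The driver debug lines above show how the restarted VM's address is recovered: the driver scans macOS's /var/db/dhcpd_leases for the entry whose hardware address matches the MAC generated for this VM (e2:1f:26:f2:db:4d), yielding 192.169.0.6. A minimal Go sketch of that matching step, using the lease values from the log; the struct and in-memory slice are simplified, hypothetical stand-ins for the real lease-file parser:

	package main

	import (
		"fmt"
		"strings"
	)

	// lease mirrors the fields the driver logs for each dhcpd_leases entry.
	type lease struct {
		Name      string
		IPAddress string
		HWAddress string
	}

	// findByMAC returns the IP of the first lease whose MAC matches.
	func findByMAC(leases []lease, mac string) (string, bool) {
		for _, l := range leases {
			if strings.EqualFold(l.HWAddress, mac) {
				return l.IPAddress, true
			}
		}
		return "", false
	}

	func main() {
		leases := []lease{
			{"minikube", "192.169.0.9", "e2:d2:09:69:a8:b4"},
			{"minikube", "192.169.0.7", "fa:54:eb:53:13:e6"},
			{"minikube", "192.169.0.8", "a6:90:90:dd:31:4c"},
			{"minikube", "192.169.0.6", "e2:1f:26:f2:db:4d"},
		}
		ip, _ := findByMAC(leases, "e2:1f:26:f2:db:4d")
		fmt.Println(ip) // 192.169.0.6, matching the "Found match" line above
	}
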
	I1213 11:33:24.200346    5233 main.go:141] libmachine: (ha-224000) Calling .GetConfigRaw
	I1213 11:33:24.201046    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:24.201273    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:24.201998    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:33:24.202010    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:24.202152    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:24.202253    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:24.202345    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:24.202460    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:24.202575    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:24.202734    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:24.202918    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:24.202926    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:33:24.209830    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:33:24.275074    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:33:24.275977    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:24.275998    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:24.276018    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:24.276028    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:24.664445    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:33:24.664462    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:33:24.779029    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:24.779050    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:24.779061    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:24.779087    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:24.779925    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:33:24.779935    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:33:30.509300    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:33:30.509378    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:33:30.509389    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:33:30.535654    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:33:35.263286    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:33:35.263305    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.263484    5233 buildroot.go:166] provisioning hostname "ha-224000"
	I1213 11:33:35.263495    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.263594    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.263690    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.263795    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.263879    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.263974    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.264111    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.264249    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.264257    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000 && echo "ha-224000" | sudo tee /etc/hostname
	I1213 11:33:35.330220    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000
	
	I1213 11:33:35.330242    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.330385    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.330487    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.330579    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.330683    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.330825    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.330962    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.330973    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:33:35.395347    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
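
The SSH command above keeps /etc/hosts consistent with the just-set hostname: if no line already ends in ha-224000, it either rewrites an existing 127.0.1.1 entry in place or appends one, so the edit is idempotent across restarts. A hedged Go sketch of the same decision logic, operating on a string rather than the real file:

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname reproduces the grep/sed logic from the SSH command above:
	// leave the file alone if the hostname is present, rewrite a 127.0.1.1
	// line if one exists, otherwise append a new entry.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // hostname already mapped
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostname("127.0.0.1 localhost\n", "ha-224000"))
	}
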
	I1213 11:33:35.395367    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:33:35.395380    5233 buildroot.go:174] setting up certificates
	I1213 11:33:35.395390    5233 provision.go:84] configureAuth start
	I1213 11:33:35.395396    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.395536    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:35.395626    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.395729    5233 provision.go:143] copyHostCerts
	I1213 11:33:35.395759    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:33:35.395813    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:33:35.395824    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:33:35.395941    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:33:35.396166    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:33:35.396198    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:33:35.396203    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:33:35.396305    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:33:35.396479    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:33:35.396511    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:33:35.396516    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:33:35.396585    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:33:35.396750    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000 san=[127.0.0.1 192.169.0.6 ha-224000 localhost minikube]
	I1213 11:33:35.608012    5233 provision.go:177] copyRemoteCerts
	I1213 11:33:35.608088    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:33:35.608110    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.608273    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.608376    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.608484    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.608616    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:35.643782    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:33:35.643849    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:33:35.663504    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:33:35.663563    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1213 11:33:35.683076    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:33:35.683137    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:33:35.702561    5233 provision.go:87] duration metric: took 307.16247ms to configureAuth
	I1213 11:33:35.702573    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:33:35.702742    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:35.702756    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:35.702886    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.702984    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.703073    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.703154    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.703252    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.703383    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.703507    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.703514    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:33:35.761527    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:33:35.761539    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:33:35.761614    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:33:35.761631    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.761761    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.761867    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.761952    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.762029    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.762180    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.762322    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.762369    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:33:35.829448    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:33:35.829473    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.829611    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.829710    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.829804    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.829882    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.830037    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.830180    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.830192    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:33:37.506714    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:33:37.506731    5233 machine.go:96] duration metric: took 13.304830015s to provisionDockerMachine
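
The `sudo diff -u ... || { mv ...; systemctl ...; }` command a few lines above is an idempotency guard: the freshly rendered docker.service is only moved into place, and the daemon reloaded, enabled, and restarted, when it differs from what is already on disk. Here diff fails because no unit exists yet on the restarted VM, so the file is installed and the multi-user.target symlink is created. A small illustrative Go sketch of that install-only-if-changed pattern (not minikube's actual implementation):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// installIfChanged writes contents to path only when they differ from the
	// current file (or the file is missing) and reports whether the caller
	// should reload and restart the service, mirroring the diff||mv guard.
	func installIfChanged(path string, contents []byte) (bool, error) {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, contents) {
			return false, nil // unchanged: skip daemon-reload and restart
		}
		if err := os.WriteFile(path, contents, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		changed, err := installIfChanged("/tmp/docker.service.new", []byte("[Unit]\n"))
		fmt.Println(changed, err) // true <nil> on first run, false <nil> after
	}
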
	I1213 11:33:37.506744    5233 start.go:293] postStartSetup for "ha-224000" (driver="hyperkit")
	I1213 11:33:37.506752    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:33:37.506763    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.506964    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:33:37.506981    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.507084    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.507184    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.507273    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.507359    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.549053    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:33:37.553822    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:33:37.553837    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:33:37.553928    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:33:37.554104    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:33:37.554111    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:33:37.554283    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:33:37.567654    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:33:37.594179    5233 start.go:296] duration metric: took 87.426295ms for postStartSetup
	I1213 11:33:37.594207    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.594408    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:33:37.594421    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.594508    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.594590    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.594724    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.594816    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.628799    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:33:37.628871    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:33:37.659933    5233 fix.go:56] duration metric: took 13.659041433s for fixHost
	I1213 11:33:37.659954    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.660095    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.660190    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.660283    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.660359    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.660499    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:37.660647    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:37.660654    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:33:37.718237    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118417.855687365
	
	I1213 11:33:37.718250    5233 fix.go:216] guest clock: 1734118417.855687365
	I1213 11:33:37.718256    5233 fix.go:229] Guest: 2024-12-13 11:33:37.855687365 -0800 PST Remote: 2024-12-13 11:33:37.659944 -0800 PST m=+14.144143612 (delta=195.743365ms)
	I1213 11:33:37.718279    5233 fix.go:200] guest clock delta is within tolerance: 195.743365ms
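
The clock check above subtracts the host timestamp from the guest's `date +%s.%N` output: 1734118417.855687365 - 1734118417.659944 = 195.743365ms, inside the skew tolerance, so no clock resync is needed. The same arithmetic in a short Go snippet, with the constants copied from the log:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest reported `date +%s.%N` == 1734118417.855687365.
		guest := time.Unix(1734118417, 855687365)
		// Host clock at the same moment: 2024-12-13 11:33:37.659944 -0800 PST.
		host := time.Date(2024, 12, 13, 19, 33, 37, 659944000, time.UTC)

		fmt.Println(guest.Sub(host)) // 195.743365ms, as logged above
	}
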
	I1213 11:33:37.718284    5233 start.go:83] releasing machines lock for "ha-224000", held for 13.717432141s
	I1213 11:33:37.718302    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718458    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:37.718557    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718855    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718959    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.719072    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:33:37.719100    5233 ssh_runner.go:195] Run: cat /version.json
	I1213 11:33:37.719104    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.719118    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.719221    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.719232    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.719345    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.719360    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.719454    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.719480    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.719588    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.719609    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.801992    5233 ssh_runner.go:195] Run: systemctl --version
	I1213 11:33:37.807211    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:33:37.811454    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:33:37.811510    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:33:37.823724    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:33:37.823735    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:33:37.823838    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:33:37.842317    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:33:37.851247    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:33:37.859919    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:33:37.859977    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:33:37.868699    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:33:37.877385    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:33:37.885895    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:33:37.894631    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:33:37.903433    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:33:37.912080    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:33:37.920838    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
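
The sed runs above rewrite /etc/containerd/config.toml before a runtime is chosen: pin the pause image, disable restrict_oom_score_adj, set SystemdCgroup = false to match the "cgroupfs" driver announced in the log, force the runc v2 shim, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. As one example, the SystemdCgroup edit corresponds to a regexp replacement like this hedged Go sketch:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		// Same substitution as the sed command in the log:
		//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
	}
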
	I1213 11:33:37.929686    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:33:37.937526    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:33:37.937575    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:33:37.946343    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
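
The three commands above are the netfilter fallback: the sysctl probe exits 255 because br_netfilter is not loaded on the freshly booted guest, so the module is loaded with modprobe and IPv4 forwarding is enabled by writing straight to /proc. A simplified Go sketch of that sequence (the /proc paths are the real ones from the log; error handling is reduced to prints):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// If the bridge-netfilter sysctl is missing, load the module first.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Println("modprobe br_netfilter:", err, string(out))
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			fmt.Println("enable ip_forward:", err)
		}
	}
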
	I1213 11:33:37.954321    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:38.055814    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:33:38.074538    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:33:38.074638    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:33:38.087031    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:33:38.101085    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:33:38.116013    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:33:38.126951    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:33:38.137488    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:33:38.158482    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:33:38.168678    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:33:38.183844    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:33:38.186730    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:33:38.193926    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:33:38.207186    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:33:38.306381    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:33:38.409182    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:33:38.409284    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:33:38.423485    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:38.520298    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:33:40.856468    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336161165s)
	I1213 11:33:40.856560    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:33:40.867785    5233 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 11:33:40.881291    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:33:40.891767    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:33:40.985833    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:33:41.094364    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.203166    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:33:41.217499    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:33:41.228676    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.322265    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:33:41.392321    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:33:41.392423    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:33:41.396866    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:33:41.396929    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:33:41.400110    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:33:41.428478    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:33:41.428562    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:33:41.446343    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:33:41.486067    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:33:41.486118    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:41.486570    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:33:41.490428    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:33:41.500921    5233 kubeadm.go:883] updating cluster {Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:33:41.501009    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:41.501080    5233 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 11:33:41.514302    5233 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.7
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1213 11:33:41.514313    5233 docker.go:619] Images already preloaded, skipping extraction
	I1213 11:33:41.514404    5233 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 11:33:41.528088    5233 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.7
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1213 11:33:41.528111    5233 cache_images.go:84] Images are preloaded, skipping loading
	I1213 11:33:41.528123    5233 kubeadm.go:934] updating node { 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1213 11:33:41.528195    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:33:41.528276    5233 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 11:33:41.563286    5233 cni.go:84] Creating CNI manager for ""
	I1213 11:33:41.563301    5233 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1213 11:33:41.563314    5233 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 11:33:41.563331    5233 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.6 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-224000 NodeName:ha-224000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:33:41.563411    5233 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-224000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.6"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.6"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
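
The kubeadm config printed above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of reading such a stream with gopkg.in/yaml.v3 (an assumed dependency; the constant below is a trimmed stand-in for the full config), pulling the cgroup driver out of the KubeletConfiguration to confirm it matches Docker's "cgroupfs" setting:

	package main

	import (
		"fmt"
		"io"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// yaml.Decoder walks the "---"-separated documents one at a time.
		dec := yaml.NewDecoder(strings.NewReader(kubeadmYAML))
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				fmt.Println("parse error:", err)
				return
			}
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", doc["cgroupDriver"]) // "cgroupfs"
			}
		}
	}

	// Trimmed stand-in for the full config printed above.
	const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs`
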
	
	I1213 11:33:41.563429    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:33:41.563502    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:33:41.577356    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:33:41.577431    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
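
kube-vip is deployed as a static pod: the kubelet watches staticPodPath (/etc/kubernetes/manifests, per the KubeletConfiguration above) and runs any pod manifest placed there, with no API server involvement. A sketch of dropping a manifest in atomically (hypothetical helper, not minikube's actual code; write-then-rename keeps the kubelet from reading a partial file):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// writeStaticPod writes the manifest to a temp file, then renames it into
	// staticPodPath so the kubelet only ever sees a complete file.
	func writeStaticPod(dir, name string, manifest []byte) error {
		tmp := filepath.Join(dir, "."+name+".tmp")
		if err := os.WriteFile(tmp, manifest, 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, filepath.Join(dir, name))
	}

	func main() {
		err := writeStaticPod("/etc/kubernetes/manifests", "kube-vip.yaml",
			[]byte("apiVersion: v1\nkind: Pod\n")) // stand-in manifest
		if err != nil {
			fmt.Println(err)
		}
	}
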
	I1213 11:33:41.577503    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:33:41.586076    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:33:41.586130    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1213 11:33:41.593693    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1213 11:33:41.607111    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:33:41.620717    5233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1213 11:33:41.634595    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:33:41.648138    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:33:41.651088    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
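
The hosts-entry update above is idempotent: grep -v strips any existing control-plane.minikube.internal line before the fresh mapping is appended and copied back with sudo. The same technique in pure Go (hypothetical ensureHostsEntry helper, pointed at a scratch file rather than /etc/hosts):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line for `name` and appends a fresh
	// "ip\tname" mapping, mirroring the grep -v / echo / cp pipeline above.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		err := ensureHostsEntry("/tmp/hosts", "192.169.0.254", "control-plane.minikube.internal")
		if err != nil {
			fmt.Println(err)
		}
	}
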
	I1213 11:33:41.660611    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.764209    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:33:41.776920    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.6
	I1213 11:33:41.776935    5233 certs.go:194] generating shared ca certs ...
	I1213 11:33:41.776947    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.777111    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:33:41.777172    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:33:41.777182    5233 certs.go:256] generating profile certs ...
	I1213 11:33:41.777268    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:33:41.777289    5233 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848
	I1213 11:33:41.777307    5233 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.6 192.169.0.7 192.169.0.8 192.169.0.254]
	I1213 11:33:41.924008    5233 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 ...
	I1213 11:33:41.924024    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848: {Name:mk14c8bdd605a32a15c7e818d08d02d64b9be917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.925000    5233 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848 ...
	I1213 11:33:41.925011    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848: {Name:mk0673ccf9e28132db2b00d320fea4d73482d286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.925290    5233 certs.go:381] copying /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 -> /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt
	I1213 11:33:41.925479    5233 certs.go:385] copying /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848 -> /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key
	I1213 11:33:41.925688    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:33:41.925697    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:33:41.925721    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:33:41.925741    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:33:41.925761    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:33:41.925780    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:33:41.925802    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:33:41.925823    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:33:41.925841    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:33:41.925928    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:33:41.925965    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:33:41.925979    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:33:41.926013    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:33:41.926042    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:33:41.926077    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:33:41.926146    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:33:41.926184    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:33:41.926207    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:41.926225    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:33:41.927710    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:33:41.951166    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:33:41.975929    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:33:42.015520    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:33:42.051250    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:33:42.097395    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:33:42.139215    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:33:42.167922    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:33:42.188284    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:33:42.207671    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:33:42.226762    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:33:42.245781    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:33:42.259332    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:33:42.263629    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:33:42.272753    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.276074    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.276126    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.280400    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:33:42.289318    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:33:42.298635    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.301936    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.301986    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.306272    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 11:33:42.315219    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:33:42.324178    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.327536    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.327583    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.331821    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
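
The symlink steps above follow OpenSSL's hashed-directory convention: trust anchors under /etc/ssl/certs are looked up by a filename derived from the certificate's subject hash, which is exactly what `openssl x509 -hash -noout` prints. A sketch that computes the hash with the same CLI and creates the "<hash>.0" link (hypothetical linkCert helper):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert creates the "<subject-hash>.0" symlink OpenSSL uses to resolve
	// trust anchors, matching the ln -fs commands in the log above.
	func linkCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // replace a stale link if present
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
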
	I1213 11:33:42.340849    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:33:42.344177    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:33:42.348774    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:33:42.353021    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:33:42.357742    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:33:42.361999    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:33:42.366226    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
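
Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours (86400 seconds). The equivalent check in pure Go with crypto/x509 (hypothetical expiresSoon helper):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresSoon reports whether the cert's NotAfter falls within the window,
	// the same test `openssl x509 -checkend` performs.
	func expiresSoon(path string, within time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(within).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}
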
	I1213 11:33:42.370715    5233 kubeadm.go:392] StartCluster: {Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:42.370839    5233 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 11:33:42.382402    5233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:33:42.390619    5233 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1213 11:33:42.390630    5233 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1213 11:33:42.390688    5233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:33:42.399169    5233 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:33:42.399486    5233 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-224000" does not appear in /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.399573    5233 kubeconfig.go:62] /Users/jenkins/minikube-integration/20090-800/kubeconfig needs updating (will repair): [kubeconfig missing "ha-224000" cluster setting kubeconfig missing "ha-224000" context setting]
	I1213 11:33:42.399754    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/kubeconfig: {Name:mk8eff3a3a3e37d84455f265c7172359004b7be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.400139    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.400368    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:33:42.400704    5233 cert_rotation.go:140] Starting client certificate rotation controller
	I1213 11:33:42.400887    5233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:33:42.408731    5233 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.6
	I1213 11:33:42.408748    5233 kubeadm.go:597] duration metric: took 18.113581ms to restartPrimaryControlPlane
	I1213 11:33:42.408754    5233 kubeadm.go:394] duration metric: took 38.045507ms to StartCluster
	I1213 11:33:42.408764    5233 settings.go:142] acquiring lock: {Name:mk0626482d1a77203bd9c1b6d841b6780f4771c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.408852    5233 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.409247    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/kubeconfig: {Name:mk8eff3a3a3e37d84455f265c7172359004b7be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.409470    5233 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:33:42.409483    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:33:42.409500    5233 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:33:42.409614    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:42.452999    5233 out.go:177] * Enabled addons: 
	I1213 11:33:42.473889    5233 addons.go:510] duration metric: took 64.391249ms for enable addons: enabled=[]
	I1213 11:33:42.473995    5233 start.go:246] waiting for cluster config update ...
	I1213 11:33:42.474008    5233 start.go:255] writing updated cluster config ...
	I1213 11:33:42.496132    5233 out.go:201] 
	I1213 11:33:42.517570    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:42.517711    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.541038    5233 out.go:177] * Starting "ha-224000-m02" control-plane node in "ha-224000" cluster
	I1213 11:33:42.583131    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:42.583188    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:33:42.583372    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:33:42.583392    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:33:42.583516    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.584724    5233 start.go:360] acquireMachinesLock for ha-224000-m02: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:33:42.584832    5233 start.go:364] duration metric: took 83.288µs to acquireMachinesLock for "ha-224000-m02"
	I1213 11:33:42.584859    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:33:42.584868    5233 fix.go:54] fixHost starting: m02
	I1213 11:33:42.585263    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:42.585289    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:42.597490    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51838
	I1213 11:33:42.598009    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:42.598520    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:42.598537    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:42.598854    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:42.598984    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:33:42.599156    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetState
	I1213 11:33:42.599250    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.599342    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid from json: 5143
	I1213 11:33:42.600521    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid 5143 missing from process table
	I1213 11:33:42.600553    5233 fix.go:112] recreateIfNeeded on ha-224000-m02: state=Stopped err=<nil>
	I1213 11:33:42.600561    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	W1213 11:33:42.600657    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:33:42.642952    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m02" ...
	I1213 11:33:42.664177    5233 main.go:141] libmachine: (ha-224000-m02) Calling .Start
	I1213 11:33:42.664494    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.664558    5233 main.go:141] libmachine: (ha-224000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid
	I1213 11:33:42.666694    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid 5143 missing from process table
	I1213 11:33:42.666707    5233 main.go:141] libmachine: (ha-224000-m02) DBG | pid 5143 is in state "Stopped"
	I1213 11:33:42.666723    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid...
	I1213 11:33:42.667115    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Using UUID 573e64b1-a821-4bce-aba3-b379863bb495
	I1213 11:33:42.694947    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Generated MAC fa:54:eb:53:13:e6
	I1213 11:33:42.695001    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:33:42.695241    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"573e64b1-a821-4bce-aba3-b379863bb495", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000429650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:42.695304    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"573e64b1-a821-4bce-aba3-b379863bb495", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000429650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:42.695353    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "573e64b1-a821-4bce-aba3-b379863bb495", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/ha-224000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:33:42.695424    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 573e64b1-a821-4bce-aba3-b379863bb495 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/ha-224000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:33:42.695442    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:33:42.697074    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Pid is 5263
	I1213 11:33:42.697519    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Attempt 0
	I1213 11:33:42.697548    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.697612    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid from json: 5263
	I1213 11:33:42.699596    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Searching for fa:54:eb:53:13:e6 in /var/db/dhcpd_leases ...
	I1213 11:33:42.699713    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:33:42.699733    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:33:42.699753    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:33:42.699767    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c99d7}
	I1213 11:33:42.699789    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Found match: fa:54:eb:53:13:e6
	I1213 11:33:42.699807    5233 main.go:141] libmachine: (ha-224000-m02) DBG | IP: 192.169.0.7
	I1213 11:33:42.699845    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetConfigRaw
	I1213 11:33:42.700566    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:33:42.700747    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.701233    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:33:42.701243    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:33:42.701360    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:33:42.701474    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:33:42.701583    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:33:42.701690    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:33:42.701786    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:33:42.701932    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:42.702072    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:33:42.702079    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:33:42.708424    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:33:42.717944    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:33:42.718853    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:42.718881    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:42.718896    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:42.718909    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:43.109099    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:33:43.109114    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:33:43.223848    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:43.223866    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:43.223877    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:43.223884    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:43.224755    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:33:43.224765    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:33:48.997042    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:33:48.997098    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:33:48.997108    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:33:49.020830    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:49 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:34:17.779287    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:34:17.779302    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.779433    5233 buildroot.go:166] provisioning hostname "ha-224000-m02"
	I1213 11:34:17.779441    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.779556    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.779664    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:17.779746    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.779835    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.779942    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:17.780083    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:17.780222    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:17.780230    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m02 && echo "ha-224000-m02" | sudo tee /etc/hostname
	I1213 11:34:17.853511    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m02
	
	I1213 11:34:17.853529    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.853672    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:17.853764    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.853853    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.853936    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:17.854073    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:17.854254    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:17.854268    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:34:17.919686    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:34:17.919701    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:34:17.919711    5233 buildroot.go:174] setting up certificates
	I1213 11:34:17.919720    5233 provision.go:84] configureAuth start
	I1213 11:34:17.919727    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.919878    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:17.919996    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.920105    5233 provision.go:143] copyHostCerts
	I1213 11:34:17.920136    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:34:17.920185    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:34:17.920199    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:34:17.920354    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:34:17.920585    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:34:17.920616    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:34:17.920621    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:34:17.920688    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:34:17.920873    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:34:17.920909    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:34:17.920914    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:34:17.920981    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:34:17.921606    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m02 san=[127.0.0.1 192.169.0.7 ha-224000-m02 localhost minikube]
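
The server cert generated above carries SANs for every name and address the Docker daemon may be reached by (127.0.0.1, the VM IP 192.169.0.7, the hostname, and so on). A minimal crypto/x509 sketch of issuing such a cert; it self-signs for brevity, whereas the provisioner signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		// SANs mirror the log above: loopback, the VM IP, and host names.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-224000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-224000-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		}
		// Self-signed for brevity; the real provisioner uses its CA as parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
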
	I1213 11:34:18.018851    5233 provision.go:177] copyRemoteCerts
	I1213 11:34:18.018930    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:34:18.018950    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.019110    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.019222    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.019333    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.019447    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:18.056757    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:34:18.056824    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:34:18.076340    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:34:18.076402    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:34:18.095849    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:34:18.095918    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:34:18.115722    5233 provision.go:87] duration metric: took 195.866505ms to configureAuth
	I1213 11:34:18.115736    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:34:18.115914    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:18.115934    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:18.116067    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.116155    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.116267    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.116362    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.116456    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.116584    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.116708    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.116716    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:34:18.177000    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:34:18.177013    5233 buildroot.go:70] root file system type: tmpfs
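
Note: the df probe above classifies the Buildroot guest's root filesystem as tmpfs, i.e. in-memory and lost on reboot, which is why the docker unit is rewritten and /var/lib/minikube/backup restored on every start (both visible below). The same check can be done without shelling out; a Linux-only sketch:

    // Sketch (Linux-only): the check behind "root file system type: tmpfs"
    // without shelling out to df. TMPFS_MAGIC is the kernel's tmpfs magic.
    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    const tmpfsMagic = 0x01021994 // linux/magic.h TMPFS_MAGIC

    func main() {
        var st unix.Statfs_t
        if err := unix.Statfs("/", &st); err != nil {
            panic(err)
        }
        fmt.Println("rootfs is tmpfs:", st.Type == tmpfsMagic)
    }
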
	I1213 11:34:18.177102    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:34:18.177115    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.177250    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.177339    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.177434    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.177521    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.177668    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.177802    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.177848    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:34:18.247535    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:34:18.247560    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.247701    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.247799    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.247889    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.247972    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.248144    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.248281    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.248294    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:34:19.945302    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:34:19.945316    5233 machine.go:96] duration metric: took 37.234619508s to provisionDockerMachine
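
Note: the unit update just above is deliberately idempotent: diff exits non-zero when the file differs or, as here, does not exist yet, and only then is the .new file moved into place and docker reloaded, enabled, and restarted. A sketch of the same write-if-changed idiom (path and payload illustrative):

    // Sketch: replace a config file only when its content changed, reporting
    // whether the owning service needs a restart.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func updateIfChanged(path string, want []byte) (bool, error) {
        have, err := os.ReadFile(path)
        if err == nil && bytes.Equal(have, want) {
            return false, nil // identical: no restart needed
        }
        // Missing file (the "can't stat" case above) or different content:
        // write to a temp name, then rename, mirroring the .new + mv dance.
        tmp := path + ".new"
        if err := os.WriteFile(tmp, want, 0644); err != nil {
            return false, err
        }
        return true, os.Rename(tmp, path)
    }

    func main() {
        changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println(changed, err) // caller daemon-reloads + restarts when changed
    }
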
	I1213 11:34:19.945325    5233 start.go:293] postStartSetup for "ha-224000-m02" (driver="hyperkit")
	I1213 11:34:19.945338    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:34:19.945348    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:19.945560    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:34:19.945574    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:19.945673    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:19.945782    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:19.945867    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:19.945970    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:19.983485    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:34:19.986722    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:34:19.986734    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:34:19.986812    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:34:19.986953    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:34:19.986959    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:34:19.987126    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:34:19.994240    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:34:20.014210    5233 start.go:296] duration metric: took 68.83207ms for postStartSetup
	I1213 11:34:20.014230    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.014422    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:34:20.014435    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.014537    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.014623    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.014704    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.014788    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:20.051647    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:34:20.051721    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:34:20.083772    5233 fix.go:56] duration metric: took 37.489367071s for fixHost
	I1213 11:34:20.083797    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.083942    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.084018    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.084114    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.084207    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.084348    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:20.084490    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:20.084497    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:34:20.144388    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118460.015290153
	
	I1213 11:34:20.144404    5233 fix.go:216] guest clock: 1734118460.015290153
	I1213 11:34:20.144410    5233 fix.go:229] Guest: 2024-12-13 11:34:20.015290153 -0800 PST Remote: 2024-12-13 11:34:20.083787 -0800 PST m=+56.558492323 (delta=-68.496847ms)
	I1213 11:34:20.144420    5233 fix.go:200] guest clock delta is within tolerance: -68.496847ms
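
Note: fix.go parses the guest's `date +%s.%N` output and compares it against the host clock; the -68ms delta above is well inside tolerance, so no correction is made. A sketch of that comparison, using the exact values from the log (the 2s tolerance here is illustrative):

    // Sketch: parse `date +%s.%N` output and compute the guest/host delta.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func guestDelta(out string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        host := time.Unix(1734118460, 83787000) // Remote: ... 11:34:20.083787
        d, _ := guestDelta("1734118460.015290153", host)
        fmt.Println(d, "within tolerance:", d.Abs() < 2*time.Second)
    }
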
	I1213 11:34:20.144423    5233 start.go:83] releasing machines lock for "ha-224000-m02", held for 37.550011232s
	I1213 11:34:20.144441    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.144584    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:20.167177    5233 out.go:177] * Found network options:
	I1213 11:34:20.188040    5233 out.go:177]   - NO_PROXY=192.169.0.6
	W1213 11:34:20.210009    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:34:20.210052    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.210927    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.211209    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.211385    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:34:20.211422    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	W1213 11:34:20.211452    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:34:20.211589    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:34:20.211610    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.211651    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.211865    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.211907    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.212101    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.212120    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.212285    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.212303    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:20.212458    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	W1213 11:34:20.245031    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:34:20.245108    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:34:20.305744    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
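
Note: rather than deleting conflicting bridge/podman CNI configs, the find command above renames them with a .mk_disabled suffix so the step is reversible. A sketch of the same rename pass (directory and patterns mirror the command in the log):

    // Sketch: move matching CNI configs aside with a .mk_disabled suffix.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err == nil {
                    fmt.Println("disabled", src)
                }
            }
        }
    }
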
	I1213 11:34:20.305779    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:34:20.305887    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:34:20.321917    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:34:20.330318    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:34:20.338449    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:34:20.338512    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:34:20.346961    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:34:20.355388    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:34:20.363629    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:34:20.371829    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:34:20.380410    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:34:20.388794    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:34:20.397231    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:34:20.405722    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:34:20.413168    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:34:20.413221    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:34:20.421725    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:34:20.429719    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:20.529241    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
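
Note: the run of sed edits above rewrites /etc/containerd/config.toml in place, most importantly forcing SystemdCgroup = false so containerd agrees with the "cgroupfs" driver chosen for the cluster. The first edit as a Go regexp instead of sed, for illustration:

    // Sketch: the SystemdCgroup sed edit as a Go regexp. Matches lines like
    // "  SystemdCgroup = true" and preserves their indentation.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0644); err != nil {
            panic(err)
        }
    }
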
	I1213 11:34:20.543578    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:34:20.543670    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:34:20.554987    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:34:20.567690    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:34:20.581251    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:34:20.592466    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:34:20.603581    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:34:20.625283    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:34:20.635539    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:34:20.650656    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:34:20.653582    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:34:20.660675    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:34:20.674213    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:34:20.766147    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:34:20.880974    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:34:20.880996    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:34:20.895110    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:20.996896    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:34:23.324011    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.325927019s)
	I1213 11:34:23.324083    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:34:23.334876    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:34:23.345278    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:34:23.440468    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:34:23.550842    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:23.658765    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:34:23.672210    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:34:23.683300    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:23.776286    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:34:23.841785    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:34:23.841892    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:34:23.847288    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:34:23.847368    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:34:23.850479    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:34:23.877340    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:34:23.877457    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:34:23.894304    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:34:23.933199    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:34:23.953827    5233 out.go:177]   - env NO_PROXY=192.169.0.6
	I1213 11:34:23.975731    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:23.976228    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:34:23.980868    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
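
Note: the bash one-liner above is a safe replace-or-append for /etc/hosts: strip any stale host.minikube.internal line, append the current mapping, and copy the result back over the original. The same idiom in Go (path and entry taken from the log):

    // Sketch: drop any stale host.minikube.internal line from /etc/hosts,
    // append the fresh mapping, and write the file back in one pass.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.169.0.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        out := strings.Join(kept, "\n") + "\n"
        if err := os.WriteFile("/etc/hosts", []byte(out), 0644); err != nil {
            panic(err)
        }
    }
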
	I1213 11:34:23.990424    5233 mustload.go:65] Loading cluster: ha-224000
	I1213 11:34:23.990607    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:23.990844    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:23.990865    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:24.002451    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51860
	I1213 11:34:24.002790    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:24.003114    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:24.003125    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:24.003331    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:24.003469    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:34:24.003590    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:24.003653    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:34:24.004855    5233 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:34:24.005135    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:24.005159    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:24.016676    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51862
	I1213 11:34:24.017013    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:24.017327    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:24.017339    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:24.017581    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:24.017710    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:34:24.017828    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.7
	I1213 11:34:24.017838    5233 certs.go:194] generating shared ca certs ...
	I1213 11:34:24.017849    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:34:24.017995    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:34:24.018055    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:34:24.018064    5233 certs.go:256] generating profile certs ...
	I1213 11:34:24.018159    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:34:24.018227    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.d29f1a5b
	I1213 11:34:24.018283    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:34:24.018291    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:34:24.018312    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:34:24.018338    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:34:24.018360    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:34:24.018382    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:34:24.018401    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:34:24.018420    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:34:24.018438    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:34:24.018527    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:34:24.018569    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:34:24.018578    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:34:24.018614    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:34:24.018649    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:34:24.018679    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:34:24.018787    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:34:24.018831    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.018854    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.018872    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.018902    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:34:24.018999    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:34:24.019091    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:34:24.019182    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:34:24.019261    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:34:24.046997    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 11:34:24.050721    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 11:34:24.059570    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 11:34:24.062693    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 11:34:24.071272    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 11:34:24.074372    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 11:34:24.083223    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 11:34:24.086307    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1213 11:34:24.095588    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 11:34:24.098711    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 11:34:24.107784    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 11:34:24.110902    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1213 11:34:24.120480    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:34:24.141070    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:34:24.160878    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:34:24.180920    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:34:24.200790    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:34:24.220908    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:34:24.240966    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:34:24.260343    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:34:24.279661    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:34:24.298866    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:34:24.318211    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:34:24.337602    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 11:34:24.351230    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 11:34:24.364930    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 11:34:24.378548    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1213 11:34:24.392045    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 11:34:24.405741    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1213 11:34:24.419366    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 11:34:24.433162    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:34:24.437460    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:34:24.446555    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.449893    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.449949    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.454195    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:34:24.463315    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:34:24.472398    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.475806    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.475869    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.480014    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 11:34:24.488936    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:34:24.498028    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.501370    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.501420    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.505749    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
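
Note: `openssl x509 -hash -noout` prints the subject-name hash OpenSSL uses to look certificates up in /etc/ssl/certs, so each trusted PEM gets a <hash>.0 symlink (3ec20f2e.0, b5213941.0 and 51391683.0 above). A sketch recreating one such link by shelling out to the same openssl invocation:

    // Sketch: recreate one <hash>.0 symlink using the openssl invocation
    // logged above. Cert path from the log; requires openssl on PATH.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // ln -fs semantics: replace any existing link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
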
	I1213 11:34:24.514801    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:34:24.518173    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:34:24.522615    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:34:24.526939    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:34:24.531281    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:34:24.535563    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:34:24.539842    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
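
Note: each `-checkend 86400` call above succeeds only if the certificate is still valid 24 hours from now; a failure would mean regenerating certs before kubeadm runs. The Go equivalent of that check (cert path taken from the log):

    // Sketch: the Go equivalent of `openssl x509 -checkend 86400`: succeed
    // only if the certificate is still valid 24 hours from now.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
            fmt.Println("certificate will expire within 86400s")
            os.Exit(1) // the same non-zero exit -checkend uses
        }
        fmt.Println("certificate is valid for at least another day")
    }
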
	I1213 11:34:24.544160    5233 kubeadm.go:934] updating node {m02 192.169.0.7 8443 v1.31.2 docker true true} ...
	I1213 11:34:24.544222    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:34:24.544239    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:34:24.544284    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:34:24.557092    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:34:24.557131    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
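
Note: this manifest lands as a static pod (the scp to /etc/kubernetes/manifests/kube-vip.yaml is just below), and vip_leaderelection with the plndr-cp-lock lease means only one control-plane node holds the 192.169.0.254 VIP at a time. A sketch that reads the VIP settings back out of the generated file; the struct mirrors only the fields needed here:

    // Sketch: extract the VIP settings from the generated manifest using
    // gopkg.in/yaml.v3. Illustrative reader, not part of minikube.
    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    type pod struct {
        Spec struct {
            Containers []struct {
                Env []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var p pod
        if err := yaml.Unmarshal(data, &p); err != nil {
            panic(err)
        }
        for _, e := range p.Spec.Containers[0].Env {
            switch e.Name {
            case "address", "vip_leasename", "lb_enable":
                fmt.Printf("%s=%s\n", e.Name, e.Value)
            }
        }
    }
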
	I1213 11:34:24.557204    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:34:24.566007    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:34:24.566093    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 11:34:24.575831    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1213 11:34:24.589369    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:34:24.603027    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:34:24.616380    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:34:24.619250    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:34:24.628866    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:24.726853    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:34:24.741435    5233 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:34:24.741619    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:24.762788    5233 out.go:177] * Verifying Kubernetes components...
	I1213 11:34:24.783602    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:24.924600    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:34:24.940595    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:34:24.940795    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 11:34:24.940831    5233 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.6:8443
	I1213 11:34:24.940998    5233 node_ready.go:35] waiting up to 6m0s for node "ha-224000-m02" to be "Ready" ...
	I1213 11:34:24.941077    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:24.941083    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:24.941090    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:24.941095    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:25.941784    5233 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I1213 11:34:25.941996    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:25.942010    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:25.942024    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:25.942031    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:26.943551    5233 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I1213 11:34:26.943636    5233 node_ready.go:53] error getting node "ha-224000-m02": Get "https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02": dial tcp 192.169.0.6:8443: connect: connection refused
	I1213 11:34:26.943705    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:26.943715    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:26.943726    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:26.943733    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.736951    5233 round_trippers.go:574] Response Status: 200 OK in 6791 milliseconds
	I1213 11:34:33.738522    5233 node_ready.go:49] node "ha-224000-m02" has status "Ready":"True"
	I1213 11:34:33.738535    5233 node_ready.go:38] duration metric: took 8.794739664s for node "ha-224000-m02" to be "Ready" ...
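
Note: the two blank "Response Status:" lines and the connection-refused error above are the apiserver still coming up behind the endpoint; node_ready simply keeps polling the node object until its Ready condition is True. A stripped-down sketch of such a poll (client-cert paths come from the rest.Config in the log; CA verification is skipped here for brevity, unlike the real client):

    // Sketch: poll a node until its Ready condition is True, tolerating the
    // connection-refused errors seen above while the apiserver restarts.
    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    type node struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        cert, err := tls.LoadX509KeyPair(
            "/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt",
            "/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key")
        if err != nil {
            panic(err)
        }
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
            Certificates:       []tls.Certificate{cert},
            InsecureSkipVerify: true, // brevity only; the real client checks ca.crt
        }}}
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02")
            if err == nil {
                var n node
                ok := resp.StatusCode == http.StatusOK &&
                    json.NewDecoder(resp.Body).Decode(&n) == nil
                resp.Body.Close()
                if ok {
                    for _, c := range n.Status.Conditions {
                        if c.Type == "Ready" && c.Status == "True" {
                            fmt.Println("node is Ready")
                            return
                        }
                    }
                }
            }
            time.Sleep(time.Second) // connection refused etc.: just retry
        }
        fmt.Println("timed out waiting for Ready")
    }
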
	I1213 11:34:33.738543    5233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 11:34:33.738582    5233 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 11:34:33.738592    5233 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 11:34:33.738642    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:33.738649    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.738656    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.738661    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.750539    5233 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1213 11:34:33.759150    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.759215    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:34:33.759222    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.759229    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.759233    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.789285    5233 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1213 11:34:33.789752    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.789760    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.789766    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.789770    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.799141    5233 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1213 11:34:33.799424    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.799433    5233 pod_ready.go:82] duration metric: took 40.258328ms for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.799440    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.799505    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sswfx
	I1213 11:34:33.799511    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.799516    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.799520    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.807914    5233 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1213 11:34:33.808397    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.808404    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.808415    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.808419    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.813376    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.813909    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.813919    5233 pod_ready.go:82] duration metric: took 14.470417ms for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.813926    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.813967    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000
	I1213 11:34:33.813972    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.813978    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.813982    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.817802    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:33.818281    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.818288    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.818294    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.818299    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.823207    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.823485    5233 pod_ready.go:93] pod "etcd-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.823495    5233 pod_ready.go:82] duration metric: took 9.562079ms for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.823503    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.823545    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m02
	I1213 11:34:33.823551    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.823557    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.823561    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.827781    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.828190    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:33.828197    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.828204    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.828207    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.831785    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:33.832141    5233 pod_ready.go:93] pod "etcd-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.832151    5233 pod_ready.go:82] duration metric: took 8.641657ms for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.832159    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.832202    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m03
	I1213 11:34:33.832207    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.832213    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.832219    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.836265    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.939780    5233 request.go:632] Waited for 102.859328ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:33.939849    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:33.939857    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.939865    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.939871    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.946873    5233 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1213 11:34:33.947618    5233 pod_ready.go:93] pod "etcd-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.947630    5233 pod_ready.go:82] duration metric: took 115.439259ms for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
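
Note: the "Waited ... due to client-side throttling" lines in this stretch come from client-go's token-bucket rate limiter; with QPS and Burst left at 0 in the rest.Config above, the client-go defaults (5 QPS, burst 10) apply, so a quick burst of pod and node GETs queues up. A sketch of raising those limits on a rest.Config (QPS and Burst are real fields; the values are illustrative):

    // Sketch: raise client-go's default rate limits so bursts of GETs are
    // not queued by the client-side limiter. Values are illustrative.
    package main

    import (
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
    )

    func configFor(kubeconfig string) (*rest.Config, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // sustained requests per second
        cfg.Burst = 100 // short-term burst allowance
        return cfg, nil
    }

    func main() {
        if _, err := configFor("/Users/jenkins/minikube-integration/20090-800/kubeconfig"); err != nil {
            panic(err)
        }
    }
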
	I1213 11:34:33.947652    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.138902    5233 request.go:632] Waited for 191.1655ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:34:34.138938    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:34:34.138982    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.138990    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.138993    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.142609    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:34.339564    5233 request.go:632] Waited for 196.386923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:34.339642    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:34.339652    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.339688    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.339702    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.342232    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:34.342592    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:34.342602    5233 pod_ready.go:82] duration metric: took 394.853592ms for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.342609    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.540215    5233 request.go:632] Waited for 197.501487ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:34:34.540359    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:34:34.540371    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.540384    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.540391    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.544062    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:34.740387    5233 request.go:632] Waited for 195.768993ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:34.740457    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:34.740463    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.740470    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.740474    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.742464    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:34.742759    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:34.742770    5233 pod_ready.go:82] duration metric: took 400.065678ms for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.742777    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.940360    5233 request.go:632] Waited for 197.497147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:34:34.940426    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:34:34.940432    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.940438    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.940442    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.942974    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.139848    5233 request.go:632] Waited for 196.049551ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:35.139909    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:35.139915    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.139922    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.139927    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.142601    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.143154    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:35.143165    5233 pod_ready.go:82] duration metric: took 400.297853ms for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.143173    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.340241    5233 request.go:632] Waited for 196.968883ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:34:35.340288    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:34:35.340294    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.340301    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.340305    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.344403    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:35.539580    5233 request.go:632] Waited for 194.599751ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:35.539614    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:35.539618    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.539625    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.539628    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.541865    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.542227    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:35.542236    5233 pod_ready.go:82] duration metric: took 398.973916ms for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.542244    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.739398    5233 request.go:632] Waited for 197.024136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:35.739550    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:35.739562    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.739574    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.739585    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.743222    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:35.939505    5233 request.go:632] Waited for 195.770633ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:35.939554    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:35.939560    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.939566    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.939572    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.941922    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:36.140471    5233 request.go:632] Waited for 97.089364ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.140522    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.140532    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.140544    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.140552    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.143672    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:36.339675    5233 request.go:632] Waited for 195.459387ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.339785    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.339799    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.339811    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.339818    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.344343    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:36.543195    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.543214    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.543223    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.543228    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.546614    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:36.740875    5233 request.go:632] Waited for 193.633171ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.740939    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.740951    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.740963    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.740974    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.745536    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:37.043269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:37.043284    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.043293    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.043297    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.046460    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:37.139384    5233 request.go:632] Waited for 92.520369ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.139445    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.139451    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.139457    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.139461    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.141508    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:37.544411    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:37.544439    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.544458    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.544464    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.548035    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:37.548715    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.548726    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.548734    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.548740    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.551007    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:37.551414    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:38.043335    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:38.043360    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.043371    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.043377    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.046826    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:38.047379    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:38.047390    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.047397    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.047402    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.049403    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:38.543656    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:38.543682    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.543702    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.543709    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.546343    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:38.546787    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:38.546797    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.546803    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.546807    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.548405    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:39.043375    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:39.043397    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.043405    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.043409    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.046060    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:39.046784    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:39.046792    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.046798    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.046801    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.048453    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:39.543079    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:39.543094    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.543100    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.543103    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.545426    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:39.545991    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:39.545999    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.546005    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.546008    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.548059    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:40.044134    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:40.044192    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.044205    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.044212    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.048181    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:40.048585    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:40.048594    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.048600    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.048603    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.050402    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:40.050801    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:40.543746    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:40.543772    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.543785    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.543818    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.547875    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:40.548358    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:40.548366    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.548372    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.548375    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.550043    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:41.043443    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:41.043501    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.043516    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.043523    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.047137    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:41.047586    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:41.047593    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.047598    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.047602    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.049298    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:41.544147    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:41.544170    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.544182    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.544190    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.548033    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:41.548573    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:41.548581    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.548587    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.548592    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.550267    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.044241    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:42.044256    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.044264    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.044268    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.046885    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:42.047355    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:42.047363    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.047369    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.047373    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.049099    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.543746    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:42.543762    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.543771    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.543776    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.546146    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:42.546521    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:42.546529    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.546535    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.546538    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.548300    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.548618    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:43.043836    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:43.043862    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.043875    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.043884    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.047393    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:43.048068    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:43.048075    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.048082    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.048085    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.049985    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:43.544065    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:43.544086    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.544097    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.544117    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.547029    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:43.547638    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:43.547645    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.547651    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.547657    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.549301    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.044961    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:44.044988    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.045023    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.045031    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.048485    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:44.049062    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:44.049070    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.049076    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.049081    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.050740    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.545903    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:44.545928    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.545945    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.545956    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.549955    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:44.550463    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:44.550470    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.550476    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.550479    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.552158    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.552451    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:45.045945    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:45.045972    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.045984    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.045991    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.049387    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:45.050098    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:45.050109    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.050117    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.050123    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.051738    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:45.544140    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:45.544159    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.544168    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.544172    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.546873    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:45.547352    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:45.547360    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.547366    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.547370    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.548773    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:46.043998    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:46.044020    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.044032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.044038    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.047292    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:46.047783    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:46.047790    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.047795    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.047798    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.049310    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:46.544571    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:46.544597    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.544609    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.544616    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.548134    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:46.548745    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:46.548755    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.548762    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.548771    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.550544    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:47.044994    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:47.045015    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.045026    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.045032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.048476    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:47.049178    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:47.049189    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.049197    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.049202    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.050811    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:47.051136    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:47.545774    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:47.545796    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.545809    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.545816    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.549567    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:47.550282    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:47.550292    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.550308    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.550313    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.552150    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:48.044237    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:48.044252    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.044262    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.044267    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.046593    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:48.047034    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:48.047041    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.047047    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.047051    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.048719    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:48.544694    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:48.544762    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.544781    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.544788    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.548156    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:48.548805    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:48.548813    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.548819    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.548830    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.550405    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:49.045819    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:49.045842    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.045854    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.045864    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.049109    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:49.049810    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:49.049821    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.049828    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.049834    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.051675    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:49.052058    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:49.546343    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:49.546370    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.546384    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.546391    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.550058    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:49.550673    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:49.550684    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.550692    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.550697    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.552559    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.044335    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:50.044361    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.044373    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.044380    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.048285    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.048872    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.048879    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.048885    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.048889    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.050497    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.544806    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:50.544862    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.544875    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.544885    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.548751    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.549398    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.549406    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.549412    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.549416    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.550966    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.551275    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.551284    5233 pod_ready.go:82] duration metric: took 15.007121321s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
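
	[Editor's note] The block above is the readiness poll for kube-controller-manager-ha-224000-m02: roughly every 500ms the test GETs the pod and its node, logging "Ready":"False" until the pod's Ready condition turns True (15.007s in this run). A condensed sketch of that wait pattern with client-go; this approximates the loop visible in the log, not minikube's actual pod_ready.go:

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // waitPodReady polls the API server until the named pod reports the
	    // Ready condition, mirroring the GET-pod loop in the log above.
	    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
	    		func(ctx context.Context) (bool, error) {
	    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	    			if err != nil {
	    				return false, nil // treat errors as transient: keep polling
	    			}
	    			for _, c := range pod.Status.Conditions {
	    				if c.Type == corev1.PodReady {
	    					return c.Status == corev1.ConditionTrue, nil
	    				}
	    			}
	    			return false, nil
	    		})
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	if err := waitPodReady(context.Background(), cs, "kube-system",
	    		"kube-controller-manager-ha-224000-m02"); err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("pod is Ready")
	    }
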
	I1213 11:34:50.551291    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.551328    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:34:50.551333    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.551338    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.551343    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.553068    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.553502    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.553509    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.553514    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.553517    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.555304    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.555632    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.555640    5233 pod_ready.go:82] duration metric: took 4.343987ms for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.555647    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.555686    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:34:50.555691    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.555696    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.555699    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.557601    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.557970    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:34:50.557977    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.557983    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.557986    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.559417    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.559883    5233 pod_ready.go:93] pod "kube-proxy-7b8ch" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.559891    5233 pod_ready.go:82] duration metric: took 4.238545ms for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.559899    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.559932    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:34:50.559949    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.559956    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.559960    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.562004    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:50.562348    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:50.562356    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.562361    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.562365    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.563914    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.564222    5233 pod_ready.go:93] pod "kube-proxy-9wj7k" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.564231    5233 pod_ready.go:82] duration metric: took 4.326466ms for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.564237    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.564269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:34:50.564274    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.564280    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.564293    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.565929    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.566322    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.566328    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.566334    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.566337    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.567867    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.568197    5233 pod_ready.go:93] pod "kube-proxy-9wsr4" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.568208    5233 pod_ready.go:82] duration metric: took 3.96239ms for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.568215    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.745519    5233 request.go:632] Waited for 177.216442ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:34:50.745569    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:34:50.745584    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.745599    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.745607    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.748965    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.946816    5233 request.go:632] Waited for 197.362494ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.946935    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.946944    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.946958    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.946964    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.950494    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.950832    5233 pod_ready.go:93] pod "kube-proxy-gmw9z" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.950846    5233 pod_ready.go:82] duration metric: took 382.598257ms for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.950855    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.146433    5233 request.go:632] Waited for 195.515852ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:34:51.146519    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:34:51.146528    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.146539    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.146545    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.150256    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.346180    5233 request.go:632] Waited for 195.336158ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:51.346304    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:51.346314    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.346325    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.346333    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.350059    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.350701    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:51.350714    5233 pod_ready.go:82] duration metric: took 399.82535ms for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.350723    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.546175    5233 request.go:632] Waited for 195.389456ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:34:51.546301    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:34:51.546322    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.546341    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.546357    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.549469    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.745754    5233 request.go:632] Waited for 195.890122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:51.745865    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:51.745871    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.745877    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.745881    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.747825    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:51.748179    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:51.748191    5233 pod_ready.go:82] duration metric: took 397.435321ms for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.748198    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.945402    5233 request.go:632] Waited for 197.127949ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:34:51.945442    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:34:51.945447    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.945453    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.945457    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.948002    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:52.146346    5233 request.go:632] Waited for 197.812373ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:52.146446    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:52.146458    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.146470    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.146477    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.150176    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.150503    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:52.150514    5233 pod_ready.go:82] duration metric: took 402.286111ms for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:52.150525    5233 pod_ready.go:39] duration metric: took 18.409559513s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 11:34:52.150552    5233 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:34:52.150642    5233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:52.164316    5233 api_server.go:72] duration metric: took 27.417579599s to wait for apiserver process to appear ...
	I1213 11:34:52.164330    5233 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:34:52.164347    5233 api_server.go:253] Checking apiserver healthz at https://192.169.0.6:8443/healthz ...
	I1213 11:34:52.168889    5233 api_server.go:279] https://192.169.0.6:8443/healthz returned 200:
	ok
	I1213 11:34:52.168929    5233 round_trippers.go:463] GET https://192.169.0.6:8443/version
	I1213 11:34:52.168934    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.168946    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.168950    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.169508    5233 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1213 11:34:52.169593    5233 api_server.go:141] control plane version: v1.31.2
	I1213 11:34:52.169605    5233 api_server.go:131] duration metric: took 5.269383ms to wait for apiserver health ...
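
	[Editor's note] With all system pods Ready, the log gates on apiserver health: a GET to /healthz (the apiserver answers 200 with the literal body "ok") followed by a GET to /version to read the control-plane version (v1.31.2 here). A hedged equivalent of those two checks using client-go's discovery client; this is a sketch of the same probes, not minikube's api_server.go:

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}

	    	// GET /healthz: a healthy apiserver returns 200 and the body "ok".
	    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("healthz: %s\n", body)

	    	// GET /version: reports the control-plane version string.
	    	info, err := cs.Discovery().ServerVersion()
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("control plane version: %s\n", info.GitVersion)
	    }
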
	I1213 11:34:52.169610    5233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:34:52.346116    5233 request.go:632] Waited for 176.438003ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.346261    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.346270    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.346282    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.346288    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.351411    5233 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1213 11:34:52.356738    5233 system_pods.go:59] 26 kube-system pods found
	I1213 11:34:52.356755    5233 system_pods.go:61] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:34:52.356759    5233 system_pods.go:61] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:34:52.356761    5233 system_pods.go:61] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:34:52.356765    5233 system_pods.go:61] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:34:52.356768    5233 system_pods.go:61] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:34:52.356771    5233 system_pods.go:61] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:34:52.356774    5233 system_pods.go:61] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:34:52.356776    5233 system_pods.go:61] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:34:52.356780    5233 system_pods.go:61] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:34:52.356782    5233 system_pods.go:61] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:34:52.356785    5233 system_pods.go:61] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:34:52.356788    5233 system_pods.go:61] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:34:52.356791    5233 system_pods.go:61] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:34:52.356793    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:34:52.356796    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:34:52.356799    5233 system_pods.go:61] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:34:52.356802    5233 system_pods.go:61] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:34:52.356804    5233 system_pods.go:61] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:34:52.356807    5233 system_pods.go:61] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:34:52.356810    5233 system_pods.go:61] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:34:52.356813    5233 system_pods.go:61] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:34:52.356815    5233 system_pods.go:61] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:34:52.356818    5233 system_pods.go:61] "kube-vip-ha-224000" [5e087427-c14c-4a6c-8a87-f20ea865cca7] Running
	I1213 11:34:52.356821    5233 system_pods.go:61] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:34:52.356823    5233 system_pods.go:61] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:34:52.356826    5233 system_pods.go:61] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running
	I1213 11:34:52.356830    5233 system_pods.go:74] duration metric: took 187.204101ms to wait for pod list to return data ...
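
The "Waited ... due to client-side throttling" entries above come from the Kubernetes client's own token-bucket rate limiter, not from server-side priority and fairness. A small sketch of that behavior using golang.org/x/time/rate; QPS=5 and Burst=10 mirror client-go's defaults, but treat the exact numbers here as an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // token bucket: refill at 5 tokens/s, hold at most 10
        lim := rate.NewLimiter(rate.Limit(5), 10)
        for i := 0; i < 12; i++ {
            start := time.Now()
            if err := lim.Wait(context.Background()); err != nil {
                panic(err)
            }
            // once the burst is spent, each call blocks ~200ms, like the waits in the log
            fmt.Printf("request %2d waited %v\n", i, time.Since(start).Round(time.Millisecond))
        }
    }
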
	I1213 11:34:52.356836    5233 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:34:52.547123    5233 request.go:632] Waited for 190.17926ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:34:52.547175    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:34:52.547184    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.547197    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.547205    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.550987    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.551153    5233 default_sa.go:45] found service account: "default"
	I1213 11:34:52.551169    5233 default_sa.go:55] duration metric: took 194.315508ms for default service account to be created ...
	I1213 11:34:52.551177    5233 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:34:52.745633    5233 request.go:632] Waited for 194.336495ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.745749    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.745782    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.745804    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.745815    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.750592    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:52.755864    5233 system_pods.go:86] 26 kube-system pods found
	I1213 11:34:52.755877    5233 system_pods.go:89] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:34:52.755881    5233 system_pods.go:89] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:34:52.755884    5233 system_pods.go:89] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:34:52.755887    5233 system_pods.go:89] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:34:52.755890    5233 system_pods.go:89] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:34:52.755893    5233 system_pods.go:89] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:34:52.755896    5233 system_pods.go:89] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:34:52.755899    5233 system_pods.go:89] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:34:52.755902    5233 system_pods.go:89] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:34:52.755905    5233 system_pods.go:89] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:34:52.755908    5233 system_pods.go:89] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:34:52.755911    5233 system_pods.go:89] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:34:52.755914    5233 system_pods.go:89] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:34:52.755917    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:34:52.755919    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:34:52.755923    5233 system_pods.go:89] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:34:52.755926    5233 system_pods.go:89] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:34:52.755929    5233 system_pods.go:89] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:34:52.755932    5233 system_pods.go:89] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:34:52.755935    5233 system_pods.go:89] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:34:52.755938    5233 system_pods.go:89] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:34:52.755941    5233 system_pods.go:89] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:34:52.755944    5233 system_pods.go:89] "kube-vip-ha-224000" [5e087427-c14c-4a6c-8a87-f20ea865cca7] Running
	I1213 11:34:52.755946    5233 system_pods.go:89] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:34:52.755952    5233 system_pods.go:89] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:34:52.755956    5233 system_pods.go:89] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running
	I1213 11:34:52.755960    5233 system_pods.go:126] duration metric: took 204.766483ms to wait for k8s-apps to be running ...
	I1213 11:34:52.755970    5233 system_svc.go:44] waiting for kubelet service to be running ...
	I1213 11:34:52.756038    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:34:52.767749    5233 system_svc.go:56] duration metric: took 11.776634ms WaitForService to wait for kubelet
	I1213 11:34:52.767765    5233 kubeadm.go:582] duration metric: took 28.020992834s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:34:52.767792    5233 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:34:52.945101    5233 request.go:632] Waited for 177.223908ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes
	I1213 11:34:52.945150    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes
	I1213 11:34:52.945158    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.945170    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.945176    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.949117    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.950061    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950074    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950086    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950090    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950094    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950097    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950099    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950102    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950105    5233 node_conditions.go:105] duration metric: took 182.296841ms to run NodePressure ...
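
The NodePressure verification above reduces to listing /api/v1/nodes and reading each node's capacity fields. A sketch that decodes a saved copy of that response with plain encoding/json rather than client-go (the nodes.json filename is hypothetical):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // nodeList captures just the fields the check above cares about.
    type nodeList struct {
        Items []struct {
            Metadata struct{ Name string } `json:"metadata"`
            Status   struct {
                Capacity map[string]string `json:"capacity"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        // assumption: nodes.json holds a saved response from GET /api/v1/nodes
        data, err := os.ReadFile("nodes.json")
        if err != nil {
            panic(err)
        }
        var nl nodeList
        if err := json.Unmarshal(data, &nl); err != nil {
            panic(err)
        }
        for _, n := range nl.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
        }
    }
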
	I1213 11:34:52.950114    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:34:52.950132    5233 start.go:255] writing updated cluster config ...
	I1213 11:34:52.972494    5233 out.go:201] 
	I1213 11:34:52.993694    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:52.993820    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.016586    5233 out.go:177] * Starting "ha-224000-m03" control-plane node in "ha-224000" cluster
	I1213 11:34:53.090440    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:34:53.090478    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:34:53.090696    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:34:53.090718    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:34:53.090850    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.091713    5233 start.go:360] acquireMachinesLock for ha-224000-m03: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:34:53.091822    5233 start.go:364] duration metric: took 84.906µs to acquireMachinesLock for "ha-224000-m03"
	I1213 11:34:53.091846    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:34:53.091854    5233 fix.go:54] fixHost starting: m03
	I1213 11:34:53.092290    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:53.092327    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:53.104639    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51869
	I1213 11:34:53.104960    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:53.105280    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:53.105294    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:53.105531    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:53.105628    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:34:53.105732    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetState
	I1213 11:34:53.105817    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.105891    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid from json: 4216
	I1213 11:34:53.107018    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid 4216 missing from process table
	I1213 11:34:53.107070    5233 fix.go:112] recreateIfNeeded on ha-224000-m03: state=Stopped err=<nil>
	I1213 11:34:53.107090    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	W1213 11:34:53.107166    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:34:53.128583    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m03" ...
	I1213 11:34:53.170463    5233 main.go:141] libmachine: (ha-224000-m03) Calling .Start
	I1213 11:34:53.170757    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.170820    5233 main.go:141] libmachine: (ha-224000-m03) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid
	I1213 11:34:53.173341    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid 4216 missing from process table
	I1213 11:34:53.173354    5233 main.go:141] libmachine: (ha-224000-m03) DBG | pid 4216 is in state "Stopped"
	I1213 11:34:53.173370    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid...
	I1213 11:34:53.173814    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Using UUID a949994f-ed60-4f04-8e19-b8e4ec0a7cc4
	I1213 11:34:53.198944    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Generated MAC a6:90:90:dd:31:4c
	I1213 11:34:53.198971    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:34:53.199150    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043b710)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:34:53.199192    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043b710)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:34:53.199234    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/ha-224000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:34:53.199276    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a949994f-ed60-4f04-8e19-b8e4ec0a7cc4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/ha-224000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:34:53.199299    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:34:53.201829    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Pid is 5320
	I1213 11:34:53.202230    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Attempt 0
	I1213 11:34:53.202250    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.202308    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid from json: 5320
	I1213 11:34:53.203502    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Searching for a6:90:90:dd:31:4c in /var/db/dhcpd_leases ...
	I1213 11:34:53.203593    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:34:53.203623    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9a30}
	I1213 11:34:53.203647    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:34:53.203666    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:34:53.203681    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c98c5}
	I1213 11:34:53.203694    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Found match: a6:90:90:dd:31:4c
	I1213 11:34:53.203705    5233 main.go:141] libmachine: (ha-224000-m03) DBG | IP: 192.169.0.8
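
The lease lookup above scans macOS's /var/db/dhcpd_leases for the VM's generated MAC and returns the matching IP. A rough sketch of that scan; the ip_address=/hw_address= key names follow the bootpd lease format and should be treated as an assumption, since the real parser lives in minikube's hyperkit driver:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC walks the lease file and returns the IP bound to the given MAC.
    func ipForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // a new lease entry begins
                ip = ""
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // stored as "1,aa:bb:cc:dd:ee:ff"; drop the hardware-type prefix
                if hw := strings.TrimPrefix(line, "hw_address="); strings.HasSuffix(hw, mac) {
                    return ip, nil
                }
            }
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        fmt.Println(ipForMAC("/var/db/dhcpd_leases", "a6:90:90:dd:31:4c"))
    }
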
	I1213 11:34:53.203714    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetConfigRaw
	I1213 11:34:53.204410    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:34:53.204623    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.205075    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:34:53.205084    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:34:53.205213    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:34:53.205302    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:34:53.205398    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:34:53.205497    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:34:53.205650    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:34:53.205789    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:53.205928    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:34:53.205935    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:34:53.212601    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:34:53.221560    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:34:53.222531    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:34:53.222558    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:34:53.222580    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:34:53.222599    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:34:53.612220    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:34:53.612234    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:34:53.727037    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:34:53.727057    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:34:53.727094    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:34:53.727117    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:34:53.727874    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:34:53.727886    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:34:59.521710    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:34:59.521832    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:34:59.521841    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:34:59.545358    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:35:28.268303    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:35:28.268318    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.268453    5233 buildroot.go:166] provisioning hostname "ha-224000-m03"
	I1213 11:35:28.268464    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.268545    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.268633    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.268718    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.268794    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.268890    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.269047    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.269192    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.269201    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m03 && echo "ha-224000-m03" | sudo tee /etc/hostname
	I1213 11:35:28.331907    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m03
	
	I1213 11:35:28.331923    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.332060    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.332169    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.332280    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.332367    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.332526    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.332658    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.332669    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:35:28.389916    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:35:28.389931    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:35:28.389961    5233 buildroot.go:174] setting up certificates
	I1213 11:35:28.389971    5233 provision.go:84] configureAuth start
	I1213 11:35:28.389982    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.390117    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:28.390208    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.390313    5233 provision.go:143] copyHostCerts
	I1213 11:35:28.390344    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:35:28.390394    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:35:28.390401    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:35:28.390555    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:35:28.390787    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:35:28.390820    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:35:28.390825    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:35:28.390910    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:35:28.391077    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:35:28.391106    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:35:28.391111    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:35:28.391228    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:35:28.391418    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m03 san=[127.0.0.1 192.169.0.8 ha-224000-m03 localhost minikube]
	I1213 11:35:28.615259    5233 provision.go:177] copyRemoteCerts
	I1213 11:35:28.615322    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:35:28.615337    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.615483    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.615599    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.615704    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.615808    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:28.648163    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:35:28.648235    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:35:28.668111    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:35:28.668178    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:35:28.688091    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:35:28.688163    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:35:28.707920    5233 provision.go:87] duration metric: took 317.933618ms to configureAuth
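
configureAuth above regenerates the Docker server certificate with the SANs from the log ([127.0.0.1 192.169.0.8 ha-224000-m03 localhost minikube]) and ships it to /etc/docker. A compact crypto/x509 sketch of issuing such a cert; it self-signs for brevity, whereas minikube signs with its CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-224000-m03"}},
            // SANs copied from the provisioning log line above
            DNSNames:    []string{"ha-224000-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().AddDate(1, 0, 0),
            KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // self-signed here (template == parent); minikube passes its CA as parent
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
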
	I1213 11:35:28.707937    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:35:28.708107    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:28.708120    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:28.708271    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.708384    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.708472    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.708567    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.708672    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.708792    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.708915    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.708923    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:35:28.759762    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:35:28.759775    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:35:28.759854    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:35:28.759870    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.760005    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.760093    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.760190    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.760274    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.760438    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.760606    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.760655    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:35:28.823874    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	Environment=NO_PROXY=192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:35:28.823891    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.824044    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.824161    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.824266    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.824376    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.824572    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.824732    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.824746    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:35:30.486456    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:35:30.486475    5233 machine.go:96] duration metric: took 37.280827239s to provisionDockerMachine
	I1213 11:35:30.486485    5233 start.go:293] postStartSetup for "ha-224000-m03" (driver="hyperkit")
	I1213 11:35:30.486499    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:35:30.486509    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.486716    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:35:30.486731    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.486828    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.486916    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.487008    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.487103    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.519400    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:35:30.522965    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:35:30.522976    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:35:30.523076    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:35:30.523222    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:35:30.523229    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:35:30.523407    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:35:30.531672    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:35:30.550850    5233 start.go:296] duration metric: took 64.356166ms for postStartSetup
	I1213 11:35:30.550875    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.551059    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:35:30.551072    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.551169    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.551256    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.551369    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.551457    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.583546    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:35:30.583619    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:35:30.638958    5233 fix.go:56] duration metric: took 37.546530399s for fixHost
	I1213 11:35:30.638984    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.639131    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.639231    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.639317    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.639400    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.639557    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:30.639690    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:30.639697    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:35:30.691357    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118530.813836388
	
	I1213 11:35:30.691371    5233 fix.go:216] guest clock: 1734118530.813836388
	I1213 11:35:30.691376    5233 fix.go:229] Guest: 2024-12-13 11:35:30.813836388 -0800 PST Remote: 2024-12-13 11:35:30.638973 -0800 PST m=+127.105464891 (delta=174.863388ms)
	I1213 11:35:30.691387    5233 fix.go:200] guest clock delta is within tolerance: 174.863388ms
	I1213 11:35:30.691390    5233 start.go:83] releasing machines lock for "ha-224000-m03", held for 37.598987831s
	I1213 11:35:30.691409    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.691545    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:30.716697    5233 out.go:177] * Found network options:
	I1213 11:35:30.736372    5233 out.go:177]   - NO_PROXY=192.169.0.6,192.169.0.7
	W1213 11:35:30.757863    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:35:30.757920    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:35:30.757939    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.758810    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.759058    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.759249    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:35:30.759286    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	W1213 11:35:30.759290    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:35:30.759313    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:35:30.759449    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:35:30.759471    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.759537    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.759655    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.759708    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.759905    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.759938    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.760131    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.760152    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.760321    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	W1213 11:35:30.790341    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:35:30.790425    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:35:30.835439    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:35:30.835453    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:35:30.835523    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:35:30.850635    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:35:30.858947    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:35:30.867636    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:35:30.867708    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:35:30.876811    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:35:30.885325    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:35:30.893786    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:35:30.902226    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:35:30.910790    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:35:30.919236    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:35:30.927803    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:35:30.936377    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:35:30.943894    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:35:30.943955    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:35:30.952569    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
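
The sysctl failure above is expected on first boot: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. The same sequence as a Go sketch (Linux-only, must run as root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // the sysctl key is absent until br_netfilter is loaded
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
            }
        }
        // equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
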
	I1213 11:35:30.959891    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.061578    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:35:31.081433    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:35:31.081517    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:35:31.100335    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:35:31.112429    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:35:31.127499    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:35:31.138533    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:35:31.148917    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:35:31.174782    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:35:31.184889    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:35:31.201805    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:35:31.204856    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:35:31.212060    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:35:31.225973    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:35:31.326706    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:35:31.431909    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:35:31.431936    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:35:31.446011    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.546239    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:35:33.884526    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.338279376s)
	I1213 11:35:33.884605    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:35:33.896180    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:35:33.907512    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:35:34.018152    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:35:34.117342    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:34.216289    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:35:34.229723    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:35:34.241050    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:34.333405    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:35:34.400848    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:35:34.400950    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:35:34.406614    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:35:34.406682    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:35:34.409985    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:35:34.437608    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:35:34.437696    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:35:34.456769    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:35:34.499545    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:35:34.556752    5233 out.go:177]   - env NO_PROXY=192.169.0.6
	I1213 11:35:34.577782    5233 out.go:177]   - env NO_PROXY=192.169.0.6,192.169.0.7
	I1213 11:35:34.598561    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:34.598902    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:35:34.602518    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
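
The shell pipeline above keeps the host.minikube.internal mapping idempotent: strip any stale line, append the fresh entry, then copy the result back over /etc/hosts with sudo. An equivalent Go sketch that prints the rewritten file instead of installing it, since the copy-back needs root:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.169.0.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // drop any previous mapping so the append below stays idempotent
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        fmt.Println(strings.Join(kept, "\n")) // real code copies this back with sudo cp
    }
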
	I1213 11:35:34.612856    5233 mustload.go:65] Loading cluster: ha-224000
	I1213 11:35:34.613037    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:34.613269    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:35:34.613292    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:35:34.625281    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51891
	I1213 11:35:34.625655    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:35:34.626009    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:35:34.626025    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:35:34.626248    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:35:34.626340    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:35:34.626428    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:35:34.626490    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:35:34.627676    5233 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:35:34.627955    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:35:34.627988    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:35:34.640060    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51893
	I1213 11:35:34.640392    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:35:34.640716    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:35:34.640735    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:35:34.640975    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:35:34.641081    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:35:34.641190    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.8
	I1213 11:35:34.641199    5233 certs.go:194] generating shared ca certs ...
	I1213 11:35:34.641214    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:35:34.641369    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:35:34.641440    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:35:34.641449    5233 certs.go:256] generating profile certs ...
	I1213 11:35:34.641547    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:35:34.641650    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.f4268d28
	I1213 11:35:34.641704    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:35:34.641711    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:35:34.641732    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:35:34.641753    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:35:34.641772    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:35:34.641790    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:35:34.641809    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:35:34.641828    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:35:34.641845    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:35:34.641926    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:35:34.641977    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:35:34.641992    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:35:34.642032    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:35:34.642067    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:35:34.642096    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:35:34.642163    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:35:34.642196    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:34.642223    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:35:34.642243    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:35:34.642269    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:35:34.642361    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:35:34.642463    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:35:34.642554    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:35:34.642635    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:35:34.669703    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 11:35:34.673030    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 11:35:34.682641    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 11:35:34.686133    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 11:35:34.695208    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 11:35:34.698292    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 11:35:34.708147    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 11:35:34.711343    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1213 11:35:34.720522    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 11:35:34.723933    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 11:35:34.733200    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 11:35:34.736904    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1213 11:35:34.748040    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:35:34.768078    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:35:34.787823    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:35:34.807347    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:35:34.827367    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:35:34.847452    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:35:34.866717    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:35:34.886226    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:35:34.905392    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:35:34.924502    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:35:34.944848    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:35:34.964162    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 11:35:34.977883    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 11:35:34.991483    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 11:35:35.005083    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1213 11:35:35.018833    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 11:35:35.033559    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1213 11:35:35.047330    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 11:35:35.060953    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:35:35.065093    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:35:35.074224    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.077601    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.077646    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.081873    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
	I1213 11:35:35.091167    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:35:35.100351    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.103730    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.103786    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.107944    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:35:35.116996    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:35:35.126132    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.129577    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.129642    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.133859    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
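Each openssl x509 -hash -noout call above prints the CA's subject-name hash, which is the filename OpenSSL expects as a /etc/ssl/certs/<hash>.0 symlink when it looks up a CA by hash (b5213941 for minikubeCA in this run, per the ln -fs target). A sketch of creating such a link, shelling out to openssl just as the log does:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject-name hash, e.g. "b5213941" in this run.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// ln -fs semantics: drop any existing link, then point it at the cert.
	os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
}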
	I1213 11:35:35.143102    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:35:35.146630    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:35:35.150908    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:35:35.155104    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:35:35.159301    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:35:35.163626    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:35:35.167845    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
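The -checkend 86400 runs above ask openssl whether each control-plane certificate will still be valid 24 hours from now, so an imminent expiry fails fast. The equivalent predicate with Go's crypto/x509 (a sketch; the path is one of the certs checked in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same check as `openssl x509 -checkend 86400`: still valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 86400 seconds")
}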
	I1213 11:35:35.172217    5233 kubeadm.go:934] updating node {m03 192.169.0.8 8443 v1.31.2 docker true true} ...
	I1213 11:35:35.172277    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:35:35.172296    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:35:35.172356    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:35:35.190873    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:35:35.190925    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
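kube-vip runs as a static pod: kubelet watches /etc/kubernetes/manifests and starts any pod spec placed there, which is why the generated YAML above gets copied to kube-vip.yaml a few lines below. A minimal sketch of that write, with the manifest variable standing in for the 1440-byte config above:

package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	manifest := []byte("# generated kube-vip pod spec goes here\n") // placeholder for the YAML above
	dir := "/etc/kubernetes/manifests"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	// kubelet's static-pod watcher picks this file up and starts the pod.
	if err := os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), manifest, 0o600); err != nil {
		log.Fatal(err)
	}
}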
	I1213 11:35:35.191004    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:35:35.201615    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:35:35.201692    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 11:35:35.209907    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1213 11:35:35.223540    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:35:35.237211    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:35:35.251084    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:35:35.254255    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:35:35.264617    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:35.363941    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:35:35.379515    5233 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:35:35.379713    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:35.453014    5233 out.go:177] * Verifying Kubernetes components...
	I1213 11:35:35.489942    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:35.641418    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:35:35.655240    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:35:35.655455    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 11:35:35.655497    5233 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.6:8443
	I1213 11:35:35.655667    5233 node_ready.go:35] waiting up to 6m0s for node "ha-224000-m03" to be "Ready" ...
	I1213 11:35:35.655710    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:35.655716    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:35.655722    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:35.655726    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:35.658541    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.157140    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:36.157157    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.157163    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.157167    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.159862    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.160261    5233 node_ready.go:49] node "ha-224000-m03" has status "Ready":"True"
	I1213 11:35:36.160270    5233 node_ready.go:38] duration metric: took 504.598087ms for node "ha-224000-m03" to be "Ready" ...
	I1213 11:35:36.160277    5233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
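The GET loop that fills the rest of this log polls the API server roughly every 500ms until coredns reports Ready (still False at the end of this excerpt). A hedged client-go sketch of that wait; the function and flow are assumptions, not minikube's pod_ready implementation, but the kubeconfig path and pod name come from the log:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a kube-system pod until its Ready condition is True.
func waitPodReady(cs kubernetes.Interface, name string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return true
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between polls
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/20090-800/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(waitPodReady(cs, "coredns-7c65d6cfc9-5ds6r", 6*time.Minute))
}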
	I1213 11:35:36.160322    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:35:36.160332    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.160339    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.160345    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.164741    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:35:36.170442    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:36.170504    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:36.170510    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.170516    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.170519    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.172921    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.173369    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:36.173377    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.173383    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.173390    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.175266    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:36.671483    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:36.671501    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.671508    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.671513    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.674268    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.675049    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:36.675058    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.675065    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.675069    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.678278    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:37.170684    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:37.170697    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.170703    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.170706    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.173103    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:37.173639    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:37.173649    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.173659    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.173663    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.175563    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:37.670841    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:37.670859    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.670867    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.670870    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.673709    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:37.674599    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:37.674609    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.674616    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.674619    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.677468    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.171983    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:38.172002    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.172010    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.172014    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.174562    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.175168    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:38.175176    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.175183    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.175186    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.177058    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:38.177428    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:38.671814    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:38.671831    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.671839    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.671843    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.674211    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.674978    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:38.674987    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.674994    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.675005    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.677077    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.171353    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:39.171371    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.171379    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.171383    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.173885    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.174765    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:39.174780    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.174787    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.174791    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.176969    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.672084    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:39.672101    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.672107    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.672111    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.674182    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.674701    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:39.674709    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.674715    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.674719    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.676491    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.170778    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:40.170793    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.170801    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.170805    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.172716    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.173201    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:40.173209    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.173215    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.173218    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.174782    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.670537    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:40.670554    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.670561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.670564    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.672905    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:40.673371    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:40.673378    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.673384    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.673388    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.675334    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.675698    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:41.170540    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:41.170555    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.170561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.170565    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.172610    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:41.173071    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:41.173079    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.173086    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.173090    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.174669    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:41.670954    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:41.670970    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.670977    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.670980    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.672906    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:41.673327    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:41.673335    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.673341    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.673346    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.674840    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:42.171591    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:42.171607    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.171614    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.171626    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.173848    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:42.174323    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:42.174331    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.174336    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.174339    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.176072    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:42.670670    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:42.670685    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.670691    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.670695    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.672916    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:42.673334    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:42.673342    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.673348    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.673352    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.674953    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:43.171018    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:43.171035    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.171041    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.171044    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.173500    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:43.173933    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:43.173942    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.173948    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.173952    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.175797    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:43.176282    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:43.671883    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:43.671900    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.671909    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.671914    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.674489    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:43.674937    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:43.674945    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.674952    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.674959    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.676652    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:44.171731    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:44.171747    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.171754    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.171757    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.174220    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:44.174839    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:44.174847    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.174853    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.174858    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.176592    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:44.671463    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:44.671523    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.671535    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.671543    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.674700    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:44.675156    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:44.675163    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.675169    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.675172    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.676845    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:45.170845    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:45.170871    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.170883    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.170890    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.174136    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:45.174847    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:45.174855    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.174861    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.174865    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.177051    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:45.177329    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:45.671539    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:45.671565    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.671577    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.671584    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.674504    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:45.674930    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:45.674937    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.674944    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.674948    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.676902    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:46.171017    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:46.171043    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.171055    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.171064    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.174349    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:46.175105    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:46.175113    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.175119    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.175123    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.176671    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:46.670718    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:46.670742    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.670753    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.670760    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.673727    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:46.674143    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:46.674150    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.674155    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.674159    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.675697    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:47.171141    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:47.171167    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.171181    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.171188    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.174674    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:47.175237    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:47.175248    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.175256    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.175283    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.177291    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:47.177630    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:47.670502    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:47.670539    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.670550    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.670555    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.673105    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:47.673592    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:47.673603    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.673624    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.673631    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.675150    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:48.170714    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:48.170743    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.170753    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.170759    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.174068    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:48.174871    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:48.174879    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.174885    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.174888    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.176423    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:48.671508    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:48.671547    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.671558    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.671563    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.673769    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:48.674261    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:48.674268    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.674274    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.674276    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.676263    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:49.170991    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:49.171006    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.171015    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.171020    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.173356    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.173868    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:49.173876    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.173882    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.173893    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.175974    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.671308    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:49.671349    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.671359    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.671375    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.674049    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.674657    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:49.674666    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.674672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.674676    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.676408    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:49.676866    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:50.170526    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:50.170546    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.170555    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.170560    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.172951    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:50.173418    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:50.173454    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.173462    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.173467    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.175187    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:50.671268    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:50.671306    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.671315    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.671319    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.673518    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:50.674124    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:50.674132    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.674139    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.674142    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.675972    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.172292    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:51.172318    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.172329    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.172335    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.175388    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:51.176242    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:51.176250    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.176255    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.176271    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.178034    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.672241    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:51.672259    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.672268    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.672273    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.674716    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:51.675171    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:51.675178    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.675184    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.675187    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.677031    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.677333    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:52.171324    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:52.171350    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.171394    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.171403    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.174624    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:52.175339    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:52.175347    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.175353    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.175356    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.176912    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:52.672143    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:52.672156    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.672163    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.672166    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.674142    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:52.674648    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:52.674656    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.674662    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.674665    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.676343    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.171789    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:53.171834    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.171845    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.171850    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.173997    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:53.174633    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:53.174641    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.174647    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.174652    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.176489    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.671631    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:53.671689    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.671702    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.671708    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.674629    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:53.675317    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:53.675324    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.675330    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.675335    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.677039    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.677545    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:54.172269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:54.172296    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.172309    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.172316    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.175190    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:54.175863    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:54.175871    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.175880    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.175884    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.177695    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:54.671631    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:54.671656    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.671679    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.671687    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.674858    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:54.675633    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:54.675644    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.675652    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.675659    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.677622    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.172159    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:55.172183    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.172195    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.172200    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.175352    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:55.175951    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.175961    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.175969    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.175974    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.177826    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.672525    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:55.672548    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.672561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.672568    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.676200    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:55.676655    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.676663    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.676669    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.676672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.679603    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.680007    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.680026    5233 pod_ready.go:82] duration metric: took 19.509731372s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
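
The ~500ms cadence of GETs above is the pod_ready poll: fetch the pod, check its Ready condition, sleep, retry until the "Ready":"True" line appears. A minimal client-go sketch of that pattern (the kubeconfig path is illustrative and the helper names are not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady mirrors the check behind pod_ready.go's "Ready":"True"/"False" lines.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; the test uses the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	deadline := time.Now().Add(6 * time.Minute) // the log waits "up to 6m0s" per pod
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-5ds6r", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms request cadence above
	}
	fmt.Println("timed out waiting for Ready")
}
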
	I1213 11:35:55.680040    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.680088    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sswfx
	I1213 11:35:55.680094    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.680100    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.680104    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.682544    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.683008    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.683017    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.683023    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.683027    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.684867    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.685203    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.685212    5233 pod_ready.go:82] duration metric: took 5.165234ms for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.685222    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.685259    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000
	I1213 11:35:55.685264    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.685270    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.685274    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.687013    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.687444    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.687452    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.687458    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.687463    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.689192    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.689502    5233 pod_ready.go:93] pod "etcd-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.689510    5233 pod_ready.go:82] duration metric: took 4.282723ms for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.689517    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.689546    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m02
	I1213 11:35:55.689551    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.689557    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.689561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.691520    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.691918    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:55.691926    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.691932    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.691935    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.693585    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.694009    5233 pod_ready.go:93] pod "etcd-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.694017    5233 pod_ready.go:82] duration metric: took 4.494586ms for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.694023    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.694061    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m03
	I1213 11:35:55.694066    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.694071    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.694074    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.696047    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.696583    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:55.696591    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.696597    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.696602    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.698695    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.699182    5233 pod_ready.go:93] pod "etcd-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.699191    5233 pod_ready.go:82] duration metric: took 5.162024ms for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.699204    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.873308    5233 request.go:632] Waited for 174.059147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:35:55.873398    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:35:55.873409    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.873420    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.873432    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.877057    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:56.073941    5233 request.go:632] Waited for 196.465756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:56.073990    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:56.073998    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.074007    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.074015    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.076268    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.076663    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.076673    5233 pod_ready.go:82] duration metric: took 377.466982ms for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
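
The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's own token-bucket rate limiter once its defaults (QPS 5, burst 10) are exhausted; each readiness check issues two GETs back to back, so the limiter starts inserting ~200ms waits. A sketch of raising those limits on rest.Config, again with an illustrative kubeconfig path:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10; once the burst is spent, request.go
	// logs "Waited for ... due to client-side throttling", as seen above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}
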
	I1213 11:35:56.076681    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.272907    5233 request.go:632] Waited for 196.189621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:35:56.272950    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:35:56.272958    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.272967    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.272973    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.275118    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.473781    5233 request.go:632] Waited for 198.215756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:56.473814    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:56.473818    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.473825    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.473834    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.476052    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.476328    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.476337    5233 pod_ready.go:82] duration metric: took 399.655338ms for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.476344    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.672963    5233 request.go:632] Waited for 196.573548ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:35:56.673025    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:35:56.673042    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.673069    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.673082    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.676053    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.874041    5233 request.go:632] Waited for 197.242072ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:56.874093    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:56.874101    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.874112    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.874148    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.877393    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:56.877917    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.877925    5233 pod_ready.go:82] duration metric: took 401.579167ms for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.877932    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.072677    5233 request.go:632] Waited for 194.687466ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:35:57.072807    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:35:57.072818    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.072829    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.072837    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.076583    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:57.273280    5233 request.go:632] Waited for 195.960523ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:57.273356    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:57.273364    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.273372    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.273377    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.275590    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:57.275864    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:57.275873    5233 pod_ready.go:82] duration metric: took 397.938639ms for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.275887    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.473240    5233 request.go:632] Waited for 197.314418ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:35:57.473276    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:35:57.473282    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.473288    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.473293    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.479318    5233 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1213 11:35:57.672800    5233 request.go:632] Waited for 192.751323ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:57.672854    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:57.672865    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.672879    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.672883    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.674679    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:57.674953    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:57.674964    5233 pod_ready.go:82] duration metric: took 399.075588ms for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.674971    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.872629    5233 request.go:632] Waited for 197.615913ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:35:57.872684    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:35:57.872690    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.872698    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.872704    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.875523    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.072684    5233 request.go:632] Waited for 196.666527ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:58.072801    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:58.072814    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.072825    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.072835    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.076186    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.076572    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:58.076584    5233 pod_ready.go:82] duration metric: took 401.611001ms for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.076594    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.272566    5233 request.go:632] Waited for 195.927789ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:35:58.272623    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:35:58.272631    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.272639    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.272646    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.275090    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.473816    5233 request.go:632] Waited for 198.141217ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:35:58.473894    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:35:58.473905    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.473916    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.473922    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.476808    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.477275    5233 pod_ready.go:98] node "ha-224000-m04" hosting pod "kube-proxy-7b8ch" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-224000-m04" has status "Ready":"Unknown"
	I1213 11:35:58.477286    5233 pod_ready.go:82] duration metric: took 400.69023ms for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	E1213 11:35:58.477294    5233 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-224000-m04" hosting pod "kube-proxy-7b8ch" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-224000-m04" has status "Ready":"Unknown"
	I1213 11:35:58.477302    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.672582    5233 request.go:632] Waited for 195.231932ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:35:58.672629    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:35:58.672638    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.672649    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.672657    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.676219    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.873974    5233 request.go:632] Waited for 197.337714ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:58.874026    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:58.874034    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.874045    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.874051    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.877592    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.877988    5233 pod_ready.go:93] pod "kube-proxy-9wj7k" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:58.878000    5233 pod_ready.go:82] duration metric: took 400.696273ms for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.878009    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.073381    5233 request.go:632] Waited for 195.314343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:35:59.073433    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:35:59.073441    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.073449    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.073455    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.075792    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:59.273216    5233 request.go:632] Waited for 196.949491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:59.273267    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:59.273283    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.273292    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.273298    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.275702    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:59.276247    5233 pod_ready.go:93] pod "kube-proxy-9wsr4" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:59.276258    5233 pod_ready.go:82] duration metric: took 398.245999ms for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.276265    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.473693    5233 request.go:632] Waited for 197.370074ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:35:59.473831    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:35:59.473842    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.473854    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.473862    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.477420    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:59.672646    5233 request.go:632] Waited for 194.659895ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:59.672759    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:59.672771    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.672784    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.672794    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.676016    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:59.676434    5233 pod_ready.go:93] pod "kube-proxy-gmw9z" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:59.676444    5233 pod_ready.go:82] duration metric: took 400.177932ms for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.676451    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.873284    5233 request.go:632] Waited for 196.790328ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:35:59.873409    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:35:59.873424    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.873437    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.873446    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.876647    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.072905    5233 request.go:632] Waited for 195.872865ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:36:00.073011    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:36:00.073019    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.073028    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.073032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.076068    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.076488    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.076498    5233 pod_ready.go:82] duration metric: took 400.046456ms for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.076506    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.273249    5233 request.go:632] Waited for 196.676645ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:36:00.273361    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:36:00.273380    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.273405    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.273414    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.276870    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.473222    5233 request.go:632] Waited for 195.664041ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:36:00.473283    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:36:00.473291    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.473300    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.473304    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.475794    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:36:00.476078    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.476087    5233 pod_ready.go:82] duration metric: took 399.579687ms for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.476096    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.674009    5233 request.go:632] Waited for 197.794547ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:36:00.674081    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:36:00.674092    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.674106    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.674121    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.677780    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.873417    5233 request.go:632] Waited for 194.907567ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:36:00.873476    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:36:00.873488    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.873500    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.873508    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.876715    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.877199    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.877213    5233 pod_ready.go:82] duration metric: took 401.11429ms for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.877234    5233 pod_ready.go:39] duration metric: took 24.717168247s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 11:36:00.877249    5233 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:36:00.877335    5233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:00.889500    5233 api_server.go:72] duration metric: took 25.510179125s to wait for apiserver process to appear ...
	I1213 11:36:00.889514    5233 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:36:00.889525    5233 api_server.go:253] Checking apiserver healthz at https://192.169.0.6:8443/healthz ...
	I1213 11:36:00.892661    5233 api_server.go:279] https://192.169.0.6:8443/healthz returned 200:
	ok
	I1213 11:36:00.892694    5233 round_trippers.go:463] GET https://192.169.0.6:8443/version
	I1213 11:36:00.892700    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.892706    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.892710    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.893221    5233 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1213 11:36:00.893255    5233 api_server.go:141] control plane version: v1.31.2
	I1213 11:36:00.893263    5233 api_server.go:131] duration metric: took 3.744726ms to wait for apiserver health ...
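
The healthz probe is a plain HTTPS GET that expects the literal body "ok", followed by a GET of /version to read the control-plane version. A self-contained sketch of an equivalent check (InsecureSkipVerify only keeps the sketch standalone; minikube authenticates with the cluster's certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.169.0.6:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok", as in the log
}
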
	I1213 11:36:00.893268    5233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:36:01.073160    5233 request.go:632] Waited for 179.837088ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.073311    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.073322    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.073333    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.073340    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.081092    5233 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1213 11:36:01.086508    5233 system_pods.go:59] 26 kube-system pods found
	I1213 11:36:01.086526    5233 system_pods.go:61] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:36:01.086530    5233 system_pods.go:61] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:36:01.086533    5233 system_pods.go:61] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:36:01.086543    5233 system_pods.go:61] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:36:01.086547    5233 system_pods.go:61] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:36:01.086550    5233 system_pods.go:61] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:36:01.086553    5233 system_pods.go:61] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:36:01.086555    5233 system_pods.go:61] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:36:01.086559    5233 system_pods.go:61] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:36:01.086565    5233 system_pods.go:61] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:36:01.086569    5233 system_pods.go:61] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:36:01.086572    5233 system_pods.go:61] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:36:01.086575    5233 system_pods.go:61] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:36:01.086579    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:36:01.086582    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:36:01.086585    5233 system_pods.go:61] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:36:01.086588    5233 system_pods.go:61] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:36:01.086591    5233 system_pods.go:61] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:36:01.086593    5233 system_pods.go:61] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:36:01.086596    5233 system_pods.go:61] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:36:01.086600    5233 system_pods.go:61] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:36:01.086602    5233 system_pods.go:61] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:36:01.086606    5233 system_pods.go:61] "kube-vip-ha-224000" [6ca3e782-dd8d-4dd1-a888-c9a3c0b605a3] Running
	I1213 11:36:01.086609    5233 system_pods.go:61] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:36:01.086612    5233 system_pods.go:61] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:36:01.086616    5233 system_pods.go:61] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:36:01.086622    5233 system_pods.go:74] duration metric: took 193.351906ms to wait for pod list to return data ...
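
The 26-pod inventory above comes from a single List call against the kube-system namespace. A minimal sketch of the same listing, using the same illustrative kubeconfig path as the earlier sketches:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Mirrors the system_pods.go lines above: name, UID, phase.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
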
	I1213 11:36:01.086629    5233 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:36:01.272667    5233 request.go:632] Waited for 185.987795ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:36:01.272763    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:36:01.272774    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.272785    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.272793    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.276315    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:01.276400    5233 default_sa.go:45] found service account: "default"
	I1213 11:36:01.276412    5233 default_sa.go:55] duration metric: took 189.780655ms for default service account to be created ...
	I1213 11:36:01.276419    5233 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:36:01.473526    5233 request.go:632] Waited for 197.034094ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.473601    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.473653    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.473672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.473680    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.479025    5233 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1213 11:36:01.484476    5233 system_pods.go:86] 26 kube-system pods found
	I1213 11:36:01.484489    5233 system_pods.go:89] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:36:01.484495    5233 system_pods.go:89] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:36:01.484499    5233 system_pods.go:89] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:36:01.484502    5233 system_pods.go:89] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:36:01.484506    5233 system_pods.go:89] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:36:01.484508    5233 system_pods.go:89] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:36:01.484511    5233 system_pods.go:89] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:36:01.484516    5233 system_pods.go:89] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:36:01.484518    5233 system_pods.go:89] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:36:01.484522    5233 system_pods.go:89] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:36:01.484524    5233 system_pods.go:89] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:36:01.484527    5233 system_pods.go:89] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:36:01.484531    5233 system_pods.go:89] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:36:01.484534    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:36:01.484538    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:36:01.484540    5233 system_pods.go:89] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:36:01.484543    5233 system_pods.go:89] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:36:01.484546    5233 system_pods.go:89] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:36:01.484549    5233 system_pods.go:89] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:36:01.484552    5233 system_pods.go:89] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:36:01.484555    5233 system_pods.go:89] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:36:01.484558    5233 system_pods.go:89] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:36:01.484561    5233 system_pods.go:89] "kube-vip-ha-224000" [6ca3e782-dd8d-4dd1-a888-c9a3c0b605a3] Running
	I1213 11:36:01.484563    5233 system_pods.go:89] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:36:01.484567    5233 system_pods.go:89] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:36:01.484571    5233 system_pods.go:89] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:36:01.484576    5233 system_pods.go:126] duration metric: took 208.153776ms to wait for k8s-apps to be running ...
	I1213 11:36:01.484587    5233 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:36:01.484655    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:36:01.495689    5233 system_svc.go:56] duration metric: took 11.101939ms WaitForService to wait for kubelet
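
The kubelet check shells out to systemctl inside the VM and relies only on the exit code (--quiet suppresses output). A local stand-in sketch; minikube actually runs the command over SSH via ssh_runner, with sudo:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Non-zero exit means the unit is inactive or failed; no output is parsed.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet not active:", err)
		return
	}
	fmt.Println("kubelet active")
}
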
	I1213 11:36:01.495712    5233 kubeadm.go:582] duration metric: took 26.116392116s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:36:01.495725    5233 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:36:01.673624    5233 request.go:632] Waited for 177.853394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes
	I1213 11:36:01.673726    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes
	I1213 11:36:01.673737    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.673747    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.673785    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.677584    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:01.678344    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678354    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678360    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678364    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678367    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678369    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678372    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678375    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678378    5233 node_conditions.go:105] duration metric: took 182.650917ms to run NodePressure ...
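
The NodePressure pass reads each node's capacity out of its status. A sketch of the same read across all nodes, reusing the illustrative kubeconfig path from the earlier sketches:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// The log above reports 17734596Ki ephemeral storage and 2 CPUs per node.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}
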
	I1213 11:36:01.678389    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:36:01.678404    5233 start.go:255] writing updated cluster config ...
	I1213 11:36:01.701519    5233 out.go:201] 
	I1213 11:36:01.755040    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:36:01.755118    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.792739    5233 out.go:177] * Starting "ha-224000-m04" worker node in "ha-224000" cluster
	I1213 11:36:01.850695    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:36:01.850719    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:36:01.850830    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:36:01.850840    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:36:01.850919    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.851367    5233 start.go:360] acquireMachinesLock for ha-224000-m04: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:36:01.851417    5233 start.go:364] duration metric: took 38.664µs to acquireMachinesLock for "ha-224000-m04"
	I1213 11:36:01.851430    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:36:01.851435    5233 fix.go:54] fixHost starting: m04
	I1213 11:36:01.851670    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:36:01.851689    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:36:01.863548    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51897
	I1213 11:36:01.863864    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:36:01.864237    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:36:01.864251    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:36:01.864489    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:36:01.864595    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:01.864718    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetState
	I1213 11:36:01.864801    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.864873    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid from json: 4360
	I1213 11:36:01.866047    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid 4360 missing from process table
	I1213 11:36:01.866070    5233 fix.go:112] recreateIfNeeded on ha-224000-m04: state=Stopped err=<nil>
	I1213 11:36:01.866083    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	W1213 11:36:01.866170    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:36:01.886701    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m04" ...
	I1213 11:36:01.927945    5233 main.go:141] libmachine: (ha-224000-m04) Calling .Start
	I1213 11:36:01.928215    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.928249    5233 main.go:141] libmachine: (ha-224000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid
	I1213 11:36:01.928315    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Using UUID 3aa2edb2-289d-46e2-9534-1f9a2dff1012
	I1213 11:36:01.954122    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Generated MAC e2:d2:09:69:a8:b4
	I1213 11:36:01.954144    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:36:01.954348    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3aa2edb2-289d-46e2-9534-1f9a2dff1012", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f0e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:36:01.954378    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3aa2edb2-289d-46e2-9534-1f9a2dff1012", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f0e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:36:01.954426    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3aa2edb2-289d-46e2-9534-1f9a2dff1012", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/ha-224000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:36:01.954465    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3aa2edb2-289d-46e2-9534-1f9a2dff1012 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/ha-224000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:36:01.954478    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:36:01.956069    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Pid is 5375
	I1213 11:36:01.956512    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Attempt 0
	I1213 11:36:01.956527    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.956630    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid from json: 5375
	I1213 11:36:01.959334    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Searching for e2:d2:09:69:a8:b4 in /var/db/dhcpd_leases ...
	I1213 11:36:01.959473    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:36:01.959490    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c9a76}
	I1213 11:36:01.959506    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9a30}
	I1213 11:36:01.959522    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:36:01.959533    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:36:01.959548    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Found match: e2:d2:09:69:a8:b4
	I1213 11:36:01.959568    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetConfigRaw
	I1213 11:36:01.959573    5233 main.go:141] libmachine: (ha-224000-m04) DBG | IP: 192.169.0.9
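
The lookup above resolves the restarted VM's IP by scanning macOS's /var/db/dhcpd_leases for the MAC the driver generated. Below is a minimal Go sketch of that matching logic, assuming bootpd's usual lease-file fields (name=/ip_address=/hw_address=); it is an illustration, not the hyperkit driver's actual code. Note the "ID:1,e2:d2:9:69:a8:b4" form in the matched entry: bootpd drops leading zeros from octets, so a robust match normalizes both sides.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findIPByMAC (hypothetical helper) returns the ip_address from the lease
	// entry whose hw_address matches mac.
	func findIPByMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// format: hw_address=1,e2:d2:9:69:a8:b4 (leading zeros dropped)
				hw := strings.TrimPrefix(line, "hw_address=1,")
				if normalizeMAC(hw) == normalizeMAC(mac) {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease for %s", mac)
	}

	// normalizeMAC zero-pads each octet so "e2:d2:9:…" equals "e2:d2:09:…".
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			if len(p) == 1 {
				parts[i] = "0" + p
			}
		}
		return strings.Join(parts, ":")
	}

	func main() {
		ip, err := findIPByMAC("/var/db/dhcpd_leases", "e2:d2:09:69:a8:b4")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("IP:", ip) // the log above resolves this MAC to 192.169.0.9
	}
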
	I1213 11:36:01.960365    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:01.960553    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.960997    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:36:01.961019    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:01.961190    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:01.961347    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:01.961451    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:01.961542    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:01.961646    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:01.961799    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:01.961972    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:01.961979    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:36:01.968096    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:36:01.976979    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:36:01.978042    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:36:01.978064    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:36:01.978076    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:36:01.978087    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:36:02.370264    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:36:02.370282    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:36:02.485027    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:36:02.485059    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:36:02.485069    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:36:02.485077    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:36:02.485882    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:36:02.485893    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:36:08.339296    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:36:08.339331    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:36:08.339343    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:36:08.362659    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:36:37.019941    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:36:37.019956    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.020079    5233 buildroot.go:166] provisioning hostname "ha-224000-m04"
	I1213 11:36:37.020091    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.020181    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.020268    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.020362    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.020446    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.020550    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.020691    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.020850    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.020859    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m04 && echo "ha-224000-m04" | sudo tee /etc/hostname
	I1213 11:36:37.079455    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m04
	
	I1213 11:36:37.079470    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.079611    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.079712    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.079807    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.079899    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.080050    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.080202    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.080213    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:36:37.138441    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:36:37.138458    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:36:37.138471    5233 buildroot.go:174] setting up certificates
	I1213 11:36:37.138478    5233 provision.go:84] configureAuth start
	I1213 11:36:37.138489    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.138635    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:37.138758    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.138874    5233 provision.go:143] copyHostCerts
	I1213 11:36:37.138906    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:36:37.138980    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:36:37.138987    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:36:37.139126    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:36:37.139340    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:36:37.139389    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:36:37.139394    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:36:37.139490    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:36:37.139651    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:36:37.139700    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:36:37.139705    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:36:37.139785    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:36:37.139956    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m04 san=[127.0.0.1 192.169.0.9 ha-224000-m04 localhost minikube]
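
configureAuth above generates a per-machine server certificate signed by the minikube CA, with the SANs listed in the log (127.0.0.1, 192.169.0.9, ha-224000-m04, localhost, minikube). A minimal Go sketch of building such a certificate with crypto/x509 follows; it self-signs to stay short where the real flow signs with ca-key.pem, and none of it is minikube's provision code.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key for the server certificate (the real flow loads ca.pem/ca-key.pem
		// and signs with the CA; we self-sign to keep the sketch short).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-224000-m04"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The Subject Alternative Names from the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.9")},
			DNSNames:    []string{"ha-224000-m04", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
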
	I1213 11:36:37.316710    5233 provision.go:177] copyRemoteCerts
	I1213 11:36:37.316783    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:36:37.316812    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.316958    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.317051    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.317152    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.317246    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:37.347920    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:36:37.347992    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:36:37.367331    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:36:37.367418    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:36:37.387377    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:36:37.387449    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:36:37.407116    5233 provision.go:87] duration metric: took 268.631983ms to configureAuth
	I1213 11:36:37.407131    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:36:37.407332    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:36:37.407364    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:37.407494    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.407580    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.407680    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.407756    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.407841    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.407978    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.408110    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.408119    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:36:37.455460    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:36:37.455475    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:36:37.455568    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:36:37.455579    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.455716    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.455822    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.455928    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.456017    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.456183    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.456322    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.456371    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:36:37.514210    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	Environment=NO_PROXY=192.169.0.6,192.169.0.7
	Environment=NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:36:37.514229    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.514369    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.514460    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.514608    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.514700    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.514873    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.515015    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.515027    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:36:39.106697    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
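
The diff-or-install one-liner above is an idempotent update: if the rendered unit matches what is already at /lib/systemd/system/docker.service, nothing happens; otherwise the new file is moved into place and the service is reloaded, enabled, and restarted. Here diff failed because the unit did not exist yet, so the install branch ran. A rough Go equivalent of the pattern, with a hypothetical installIfChanged helper (not minikube code):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installIfChanged moves newPath over livePath and bounces the service
	// only when the contents actually differ.
	func installIfChanged(newPath, livePath string) error {
		newData, err := os.ReadFile(newPath)
		if err != nil {
			return err
		}
		// A read error here (e.g. the unit does not exist yet, as in the log)
		// simply means "changed": fall through to the install branch.
		if liveData, err := os.ReadFile(livePath); err == nil && bytes.Equal(newData, liveData) {
			return nil
		}
		if err := os.Rename(newPath, livePath); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := installIfChanged("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
			fmt.Println(err)
		}
	}
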
	
	I1213 11:36:39.106713    5233 machine.go:96] duration metric: took 37.146099544s to provisionDockerMachine
	I1213 11:36:39.106722    5233 start.go:293] postStartSetup for "ha-224000-m04" (driver="hyperkit")
	I1213 11:36:39.106729    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:36:39.106741    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.106958    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:36:39.106972    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.107076    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.107171    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.107250    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.107377    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.137664    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:36:39.140876    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:36:39.140886    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:36:39.140989    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:36:39.141205    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:36:39.141216    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:36:39.141482    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:36:39.148686    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:36:39.168356    5233 start.go:296] duration metric: took 61.625015ms for postStartSetup
	I1213 11:36:39.168377    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.168566    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:36:39.168580    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.168694    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.168784    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.168873    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.168955    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.200288    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:36:39.200368    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:36:39.252642    5233 fix.go:56] duration metric: took 37.401602513s for fixHost
	I1213 11:36:39.252667    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.252828    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.252931    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.253035    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.253138    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.253294    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:39.253427    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:39.253435    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:36:39.303241    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118599.429050956
	
	I1213 11:36:39.303262    5233 fix.go:216] guest clock: 1734118599.429050956
	I1213 11:36:39.303272    5233 fix.go:229] Guest: 2024-12-13 11:36:39.429050956 -0800 PST Remote: 2024-12-13 11:36:39.252657 -0800 PST m=+195.719809020 (delta=176.393956ms)
	I1213 11:36:39.303284    5233 fix.go:200] guest clock delta is within tolerance: 176.393956ms
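
The clock check works out as logged: the guest reports 1734118599.429050956 via date +%s.%N, the host reads 2024-12-13 11:36:39.252657 PST, and the difference is 0.429050956 s − 0.252657 s = 176.393956 ms, well inside tolerance. A small sketch of the comparison (names are illustrative, not minikube's fix.go):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest clock from `date +%s.%N` on the VM, host clock from the log.
		guest := time.Unix(1734118599, 429050956)
		host := time.Date(2024, 12, 13, 11, 36, 39, 252657000, time.FixedZone("PST", -8*3600))

		delta := guest.Sub(host)
		const tolerance = time.Second
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
		// Prints delta=176.393956ms, matching the fix.go lines above.
	}
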
	I1213 11:36:39.303287    5233 start.go:83] releasing machines lock for "ha-224000-m04", held for 37.452264193s
	I1213 11:36:39.303304    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.303439    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:39.324718    5233 out.go:177] * Found network options:
	I1213 11:36:39.345593    5233 out.go:177]   - NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8
	W1213 11:36:39.367406    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.367428    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.367438    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:36:39.367453    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.367872    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.367964    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.368045    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:36:39.368067    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	W1213 11:36:39.368071    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.368083    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.368091    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:36:39.368153    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:36:39.368162    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.368165    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.368280    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.368311    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.368396    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.368417    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.368502    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.368516    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.368581    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	W1213 11:36:39.395349    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:36:39.395429    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:36:39.444914    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:36:39.444929    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:36:39.445000    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:36:39.460519    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:36:39.468747    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:36:39.476970    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:36:39.477028    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:36:39.485250    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:36:39.493728    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:36:39.501920    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:36:39.510067    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:36:39.518621    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:36:39.527064    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:36:39.535503    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:36:39.544105    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:36:39.551996    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:36:39.552057    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:36:39.560903    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
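
The sequence above is the standard netfilter repair: the sysctl stat fails because br_netfilter is not loaded, so the module is loaded and then IPv4 forwarding is switched on. A sketch of the same check-then-repair in Go (must run as root, mirroring the sudo in the log; illustrative only):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); os.IsNotExist(err) {
			// Same condition as the "cannot stat" error above: the sysctl only
			// exists once the br_netfilter module is loaded.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe br_netfilter: %v: %s\n", err, out)
				return
			}
		}
		// Mirrors `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			fmt.Println("enable ip_forward:", err)
		}
	}
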
	I1213 11:36:39.569057    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:36:39.663026    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:36:39.681615    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:36:39.681707    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:36:39.701692    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:36:39.713515    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:36:39.733157    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:36:39.744420    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:36:39.755241    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:36:39.778169    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:36:39.788619    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:36:39.803742    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:36:39.806753    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:36:39.814222    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:36:39.828173    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:36:39.923220    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:36:40.025879    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:36:40.025908    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:36:40.040057    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:36:40.139577    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:37:41.169349    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.030424073s)
	I1213 11:37:41.169444    5233 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1213 11:37:41.204399    5233 out.go:201] 
	W1213 11:37:41.225442    5233 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Dec 13 19:36:37 ha-224000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.427068027Z" level=info msg="Starting up"
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.427760840Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.428340753Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.446225003Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461418150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461538159Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461607016Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461644040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461775643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461826393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461966604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462007624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462040126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462069720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462182838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462429601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464011795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464067757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464257837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464302280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464410649Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464463860Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465390367Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465443699Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465555213Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465597957Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465634744Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465705067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465941498Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466071120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466113283Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466145023Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466176156Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466211240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466250495Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466285590Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466317193Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466347259Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466376937Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466407325Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466446395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466488362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466530329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466566314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466607503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466641823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466672212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466702609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466732812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466764575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466794248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466823748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466854140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466886668Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466935305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466981167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467011716Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467066705Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467101883Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467131499Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467160087Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467188157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467216598Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467244211Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467402488Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467606858Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467674178Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467711081Z" level=info msg="containerd successfully booted in 0.022287s"
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.455600290Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.476104344Z" level=info msg="Loading containers: start."
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.568941234Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.144331314Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.199597389Z" level=info msg="Loading containers: done."
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210939061Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210976128Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210994749Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.211089971Z" level=info msg="Daemon has completed initialization"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.231136019Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 19:36:39 ha-224000-m04 systemd[1]: Started Docker Application Container Engine.
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.231344731Z" level=info msg="API listen on [::]:2376"
	Dec 13 19:36:40 ha-224000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.277223387Z" level=info msg="Processing signal 'terminated'"
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278137307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278251358Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278340377Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278256739Z" level=info msg="Daemon shutdown complete"
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: docker.service: Deactivated successfully.
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 19:36:41 ha-224000-m04 dockerd[1113]: time="2024-12-13T19:36:41.322763293Z" level=info msg="Starting up"
	Dec 13 19:37:41 ha-224000-m04 dockerd[1113]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1213 11:37:41.225503    5233 out.go:270] * 
	W1213 11:37:41.226123    5233 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:37:41.267588    5233 out.go:201] 
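	
	[editor's note — analysis, not part of the captured output] The root cause of this failure is visible in the ha-224000-m04 journal above: dockerd restarts at 19:36:41 and then cannot reach containerd over /run/containerd/containerd.sock; the blocking dial gives up exactly 60 seconds later with "context deadline exceeded", docker.service exits with status 1, and minikube gives up with the advice box above. A minimal Go sketch of how a blocking, retrying dial surfaces that exact error string (the 60 s deadline and 1 s retry interval are assumptions for illustration; this is not dockerd's actual code):
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	// dialBlocking retries the unix-socket dial until ctx expires, the way a
	// blocking gRPC dial behaves; with nothing listening on the socket it
	// eventually returns "context deadline exceeded".
	func dialBlocking(ctx context.Context, sock string) (net.Conn, error) {
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", sock)
			if err == nil {
				return conn, nil
			}
			select {
			case <-ctx.Done():
				return nil, fmt.Errorf("failed to dial %q: %w", sock, ctx.Err())
			case <-time.After(time.Second):
			}
		}
	}
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()
		if _, err := dialBlocking(ctx, "/run/containerd/containerd.sock"); err != nil {
			fmt.Println(err) // failed to dial "...": context deadline exceeded
		}
	}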
	
	
	==> Docker <==
	Dec 13 19:35:17 ha-224000 dockerd[1176]: time="2024-12-13T19:35:17.296092113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233837137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233911634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233925821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233995450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239334702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239439690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239450304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239575939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.205775306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.207076446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.207155526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.207356928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206616412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206773456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206817690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206899370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:57 ha-224000 dockerd[1176]: time="2024-12-13T19:35:57.457128150Z" level=info msg="shim disconnected" id=813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3 namespace=moby
	Dec 13 19:35:57 ha-224000 dockerd[1170]: time="2024-12-13T19:35:57.457607034Z" level=info msg="ignoring event" container=813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 19:35:57 ha-224000 dockerd[1176]: time="2024-12-13T19:35:57.457838474Z" level=warning msg="cleaning up after shim disconnected" id=813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3 namespace=moby
	Dec 13 19:35:57 ha-224000 dockerd[1176]: time="2024-12-13T19:35:57.457953841Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213145624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213212633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213225596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213337090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b961eac98708b       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   93cd09024c535       storage-provisioner
	f1b285481948b       50415e5d05f05                                                                                         2 minutes ago        Running             kindnet-cni               1                   06f29a39c508a       kindnet-687js
	38ee6f8374b04       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   6ed2d05ea2409       busybox-7dff88458-wbknx
	5f565c400b733       505d571f5fd56                                                                                         2 minutes ago        Running             kube-proxy                1                   31cf2effc73d7       kube-proxy-9wj7k
	5050cecf942e2       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   645aca2ea936b       coredns-7c65d6cfc9-5ds6r
	df8ddf72aa14f       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   8cef794a507b6       coredns-7c65d6cfc9-sswfx
	dba699a298586       0486b6c53a1b5                                                                                         3 minutes ago        Running             kube-controller-manager   2                   da5d4e126c370       kube-controller-manager-ha-224000
	2c7e84811a057       9499c9960544e                                                                                         3 minutes ago        Running             kube-apiserver            2                   6651a1d0a89d4       kube-apiserver-ha-224000
	d34c8e7a98686       f1c87c24be687                                                                                         3 minutes ago        Running             kube-vip                  0                   53478f9b98c3e       kube-vip-ha-224000
	0457a6eb9fce4       9499c9960544e                                                                                         3 minutes ago        Exited              kube-apiserver            1                   6651a1d0a89d4       kube-apiserver-ha-224000
	78030050b83d7       2e96e5913fc06                                                                                         3 minutes ago        Running             etcd                      1                   48f05aec7d5f4       etcd-ha-224000
	8cce3a8cb1260       847c7bc1a5418                                                                                         3 minutes ago        Running             kube-scheduler            1                   d605ad9f8c9f5       kube-scheduler-ha-224000
	dda62d21c5c2f       0486b6c53a1b5                                                                                         3 minutes ago        Exited              kube-controller-manager   1                   da5d4e126c370       kube-controller-manager-ha-224000
	89334114a6e1e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   8 minutes ago        Exited              busybox                   0                   ddc328d7180f5       busybox-7dff88458-wbknx
	cf4b333fe5f49       c69fa2e9cbf5f                                                                                         11 minutes ago       Exited              coredns                   0                   f18799b2271c7       coredns-7c65d6cfc9-sswfx
	f16805d6df5d4       c69fa2e9cbf5f                                                                                         11 minutes ago       Exited              coredns                   0                   653774da684e6       coredns-7c65d6cfc9-5ds6r
	532326a9b719a       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              11 minutes ago       Exited              kindnet-cni               0                   989ccdb8aa000       kindnet-687js
	94480a2dd9b5e       505d571f5fd56                                                                                         11 minutes ago       Exited              kube-proxy                0                   1cd5ef5ffe1e4       kube-proxy-9wj7k
	ad0dc00c3676d       2e96e5913fc06                                                                                         11 minutes ago       Exited              etcd                      0                   6121511eb160b       etcd-ha-224000
	63c39e011231f       847c7bc1a5418                                                                                         11 minutes ago       Exited              kube-scheduler            0                   2046a92fb05bb       kube-scheduler-ha-224000
	
	
	==> coredns [5050cecf942e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:39218 - 50752 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" - - 0 6.001691935s
	[ERROR] plugin/errors: 2 2774560578117609647.1532570917481937419. HINFO: read udp 10.244.0.4:35345->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:41938 - 7905 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" - - 0 6.001636827s
	[ERROR] plugin/errors: 2 2774560578117609647.1532570917481937419. HINFO: read udp 10.244.0.4:38380->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:41437 - 45110 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.001832207s
	[INFO] 127.0.0.1:44515 - 54662 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" - - 0 4.002458371s
	[ERROR] plugin/errors: 2 2774560578117609647.1532570917481937419. HINFO: read udp 10.244.0.4:41265->192.169.0.1:53: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[446765318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30005ms):
	Trace[446765318]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (19:35:47.544)
	Trace[446765318]: [30.005577524s] [30.005577524s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[393764073]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30006ms):
	Trace[393764073]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (19:35:47.544)
	Trace[393764073]: [30.006232941s] [30.006232941s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[531717446]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.543) (total time: 30002ms):
	Trace[531717446]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:35:47.544)
	Trace[531717446]: [30.002274294s] [30.002274294s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
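	
	[editor's note] Both restarted coredns instances (this one and df8ddf72aa14 below) show the same two symptoms: HINFO probes forwarded to the upstream resolver 192.169.0.1:53 time out intermittently, and the kubernetes Service VIP 10.96.0.1:443 is unreachable while kube-apiserver is still coming back, so each reflector list fails with a dial i/o timeout at almost exactly 30 s (the traces above report 30002–30006 ms). A hedged reachability probe under those assumptions — both addresses are taken from the log, the probe uses TCP even though the failing DNS queries were UDP, and it is meant to be run from inside a pod on this cluster:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// probe dials addr with the same 30 s budget the reflector traces show
	// and reports how long the attempt took.
	func probe(addr string) {
		start := time.Now()
		conn, err := net.DialTimeout("tcp", addr, 30*time.Second)
		if err != nil {
			fmt.Printf("%s: %v after %s\n", addr, err, time.Since(start).Round(time.Millisecond))
			return
		}
		conn.Close()
		fmt.Printf("%s: reachable in %s\n", addr, time.Since(start).Round(time.Millisecond))
	}
	
	func main() {
		probe("10.96.0.1:443")  // kubernetes Service VIP from the reflector errors
		probe("192.169.0.1:53") // upstream resolver from the HINFO timeouts
	}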
	
	
	==> coredns [cf4b333fe5f4] <==
	[INFO] 10.244.2.2:52684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320449s
	[INFO] 10.244.2.2:56489 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.010940453s
	[INFO] 10.244.2.2:53656 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010500029s
	[INFO] 10.244.1.2:40275 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235614s
	[INFO] 10.244.0.4:54501 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000070742s
	[INFO] 10.244.2.2:54661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099137s
	[INFO] 10.244.2.2:53526 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010894436s
	[INFO] 10.244.2.2:43837 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093129s
	[INFO] 10.244.2.2:48144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01305588s
	[INFO] 10.244.2.2:37929 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083719s
	[INFO] 10.244.2.2:56915 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109123s
	[INFO] 10.244.2.2:54881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064664s
	[INFO] 10.244.1.2:36673 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000091432s
	[INFO] 10.244.1.2:34220 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009472s
	[INFO] 10.244.1.2:38397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007902s
	[INFO] 10.244.0.4:44003 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000090711s
	[INFO] 10.244.0.4:37919 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060032s
	[INFO] 10.244.0.4:57710 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104441s
	[INFO] 10.244.2.2:36812 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000142147s
	[INFO] 10.244.1.2:43077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013892s
	[INFO] 10.244.0.4:44480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107424s
	[INFO] 10.244.0.4:50392 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00013146s
	[INFO] 10.244.0.4:57954 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090837s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [df8ddf72aa14] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:35560 - 57542 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" - - 0 6.003265442s
	[ERROR] plugin/errors: 2 7691483522066365998.6584771563269026758. HINFO: read udp 10.244.0.3:57849->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:36876 - 8169 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" - - 0 2.001203837s
	[ERROR] plugin/errors: 2 7691483522066365998.6584771563269026758. HINFO: read udp 10.244.0.3:33115->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:55518 - 55981 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" - - 0 6.003381935s
	[ERROR] plugin/errors: 2 7691483522066365998.6584771563269026758. HINFO: read udp 10.244.0.3:35637->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:51113 - 20297 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.000906393s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[469351415]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30002ms):
	Trace[469351415]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:35:47.541)
	Trace[469351415]: [30.002900538s] [30.002900538s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[235804559]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30004ms):
	Trace[235804559]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:35:47.543)
	Trace[235804559]: [30.004014569s] [30.004014569s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[222840766]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.542) (total time: 30002ms):
	Trace[222840766]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:35:47.544)
	Trace[222840766]: [30.002499147s] [30.002499147s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [f16805d6df5d] <==
	[INFO] 10.244.0.4:50423 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000616257s
	[INFO] 10.244.0.4:51571 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066308s
	[INFO] 10.244.0.4:55425 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000034221s
	[INFO] 10.244.0.4:33674 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091937s
	[INFO] 10.244.0.4:60931 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037068s
	[INFO] 10.244.2.2:51638 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103452s
	[INFO] 10.244.2.2:33033 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088733s
	[INFO] 10.244.2.2:51032 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145099s
	[INFO] 10.244.2.2:58035 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067066s
	[INFO] 10.244.1.2:35671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137338s
	[INFO] 10.244.1.2:43244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083679s
	[INFO] 10.244.1.2:49096 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008999s
	[INFO] 10.244.1.2:50254 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108638s
	[INFO] 10.244.0.4:50170 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091228s
	[INFO] 10.244.0.4:60410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158647s
	[INFO] 10.244.0.4:51342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086722s
	[INFO] 10.244.2.2:37837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076855s
	[INFO] 10.244.2.2:53946 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100477s
	[INFO] 10.244.2.2:48539 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00006865s
	[INFO] 10.244.1.2:34571 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102259s
	[INFO] 10.244.1.2:48156 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010558s
	[INFO] 10.244.1.2:56382 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000094051s
	[INFO] 10.244.0.4:56589 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000045096s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-224000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-224000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=ha-224000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_13T11_26_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:26:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-224000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:37:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-224000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c482b8662654c3a869b1ecefe5cf9ee
	  System UUID:                b2cf45fe-0000-0000-a947-282a845e5503
	  Boot ID:                    a3b32e80-0a2c-43a6-967b-82a2f6e8eef5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wbknx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 coredns-7c65d6cfc9-5ds6r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-7c65d6cfc9-sswfx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-ha-224000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-687js                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-224000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-224000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9wj7k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-224000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-224000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  Starting                 2m15s                kube-proxy       
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                  kubelet          Node ha-224000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                  kubelet          Node ha-224000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                  kubelet          Node ha-224000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  NodeReady                11m                  kubelet          Node ha-224000 status is now: NodeReady
	  Normal  RegisteredNode           10m                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           9m5s                 node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m1s)  kubelet          Node ha-224000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m1s)  kubelet          Node ha-224000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m1s)  kubelet          Node ha-224000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           2m                   node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	
	
	Name:               ha-224000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-224000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=ha-224000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_13T11_27_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:27:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-224000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:37:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-224000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a69af53a722464e92c469155271604e
	  System UUID:                573e4bce-0000-0000-aba3-b379863bb495
	  Boot ID:                    ae7bc928-29f4-4c6b-bd14-f4e659fc8097
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l97s5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 etcd-ha-224000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-c6kgd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-224000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-224000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9wsr4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-224000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-224000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m7s                   kube-proxy       
	  Normal   Starting                 5m4s                   kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-224000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-224000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-224000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           9m5s                   node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   Starting                 5m9s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  5m9s                   kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 5m8s                   kubelet          Node ha-224000-m02 has been rebooted, boot id: 77378fb8-5f4b-4218-9a14-15ce228529ff
	  Normal   NodeHasSufficientMemory  5m8s                   kubelet          Node ha-224000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m8s                   kubelet          Node ha-224000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m8s                   kubelet          Node ha-224000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m1s                   node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   Starting                 3m19s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m18s (x8 over 3m19s)  kubelet          Node ha-224000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m18s (x8 over 3m19s)  kubelet          Node ha-224000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m18s (x7 over 3m19s)  kubelet          Node ha-224000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m6s                   node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           3m6s                   node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           2m                     node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	
	
	Name:               ha-224000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-224000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=ha-224000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_13T11_28_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:28:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-224000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:37:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:35:35 +0000   Fri, 13 Dec 2024 19:35:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:35:35 +0000   Fri, 13 Dec 2024 19:35:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:35:35 +0000   Fri, 13 Dec 2024 19:35:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:35:35 +0000   Fri, 13 Dec 2024 19:35:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-224000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c7c9726374443a791fb4f1ce0548772
	  System UUID:                a9494f04-0000-0000-8e19-b8e4ec0a7cc4
	  Boot ID:                    b7abd244-70c3-4ab7-8619-f40279662fea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7vlsm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 etcd-ha-224000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m11s
	  kube-system                 kindnet-kpjh5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m13s
	  kube-system                 kube-apiserver-ha-224000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-controller-manager-ha-224000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-proxy-gmw9z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-ha-224000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-vip-ha-224000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m4s                   kube-proxy       
	  Normal   Starting                 9m8s                   kube-proxy       
	  Normal   NodeHasSufficientPID     9m13s (x7 over 9m13s)  kubelet          Node ha-224000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m13s                  node-controller  Node ha-224000-m03 event: Registered Node ha-224000-m03 in Controller
	  Normal   NodeAllocatableEnforced  9m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m13s (x8 over 9m13s)  kubelet          Node ha-224000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m13s (x8 over 9m13s)  kubelet          Node ha-224000-m03 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m10s                  node-controller  Node ha-224000-m03 event: Registered Node ha-224000-m03 in Controller
	  Normal   RegisteredNode           9m5s                   node-controller  Node ha-224000-m03 event: Registered Node ha-224000-m03 in Controller
	  Normal   RegisteredNode           5m1s                   node-controller  Node ha-224000-m03 event: Registered Node ha-224000-m03 in Controller
	  Normal   RegisteredNode           3m6s                   node-controller  Node ha-224000-m03 event: Registered Node ha-224000-m03 in Controller
	  Normal   RegisteredNode           3m6s                   node-controller  Node ha-224000-m03 event: Registered Node ha-224000-m03 in Controller
	  Normal   NodeNotReady             2m26s                  node-controller  Node ha-224000-m03 status is now: NodeNotReady
	  Normal   Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m8s (x3 over 2m8s)    kubelet          Node ha-224000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x3 over 2m8s)    kubelet          Node ha-224000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x3 over 2m8s)    kubelet          Node ha-224000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m8s (x2 over 2m8s)    kubelet          Node ha-224000-m03 has been rebooted, boot id: b7abd244-70c3-4ab7-8619-f40279662fea
	  Normal   NodeReady                2m8s (x2 over 2m8s)    kubelet          Node ha-224000-m03 status is now: NodeReady
	  Normal   RegisteredNode           2m                     node-controller  Node ha-224000-m03 event: Registered Node ha-224000-m03 in Controller
	
	
	Name:               ha-224000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-224000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=ha-224000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_13T11_31_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:31:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-224000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:32:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-224000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e9882ffc62647968bea651d5ce1f097
	  System UUID:                3aa246e2-0000-0000-9534-1f9a2dff1012
	  Boot ID:                    0f3125e8-e3e0-4806-91cb-fd0eaa4f608f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-g6ss2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-proxy-7b8ch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m12s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    6m19s (x2 over 6m19s)  kubelet          Node ha-224000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     6m19s (x2 over 6m19s)  kubelet          Node ha-224000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m19s (x2 over 6m19s)  kubelet          Node ha-224000-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           6m18s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  NodeReady                5m56s                  kubelet          Node ha-224000-m04 status is now: NodeReady
	  Normal  RegisteredNode           5m1s                   node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           3m6s                   node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           3m6s                   node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  NodeNotReady             2m26s                  node-controller  Node ha-224000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           2m                     node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
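	
	[editor's note] ha-224000-m04 is the node whose docker failure opens this section. Its kubelet last renewed its lease at 19:32:56; once status stopped arriving, the node controller flipped every condition to Unknown at 19:35:17, marked the node NotReady, and applied the node.kubernetes.io/unreachable NoSchedule/NoExecute taints shown above. A hedged client-go sketch that surfaces the same signal (reading KUBECONFIG from the environment is an assumption, and error handling is trimmed to panics for brevity):
	
	package main
	
	import (
		"context"
		"fmt"
		"os"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumes KUBECONFIG points at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// A healthy node reports Ready=True; ha-224000-m04 reports
			// Ready=Unknown with reason NodeStatusUnknown, as above.
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%-16s Ready=%-8s reason=%s\n", n.Name, c.Status, c.Reason)
				}
			}
		}
	}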
	
	
	==> dmesg <==
	[  +0.035991] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008030] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.835151] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007092] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.809793] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.216222] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.358309] systemd-fstab-generator[460]: Ignoring "noauto" option for root device
	[  +0.105099] systemd-fstab-generator[472]: Ignoring "noauto" option for root device
	[  +1.959406] systemd-fstab-generator[1100]: Ignoring "noauto" option for root device
	[  +0.254010] systemd-fstab-generator[1136]: Ignoring "noauto" option for root device
	[  +0.104125] systemd-fstab-generator[1148]: Ignoring "noauto" option for root device
	[  +0.104856] systemd-fstab-generator[1162]: Ignoring "noauto" option for root device
	[  +0.058611] kauditd_printk_skb: 149 callbacks suppressed
	[  +2.414891] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.103198] systemd-fstab-generator[1400]: Ignoring "noauto" option for root device
	[  +0.113797] systemd-fstab-generator[1412]: Ignoring "noauto" option for root device
	[  +0.119494] systemd-fstab-generator[1427]: Ignoring "noauto" option for root device
	[  +0.429719] systemd-fstab-generator[1587]: Ignoring "noauto" option for root device
	[  +6.882724] kauditd_printk_skb: 172 callbacks suppressed
	[Dec13 19:34] kauditd_printk_skb: 40 callbacks suppressed
	[Dec13 19:35] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.801033] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [78030050b83d] <==
	{"level":"warn","ts":"2024-12-13T19:35:19.027903Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.8:2380/version","remote-member-id":"afd89b9ec393451","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:19.028033Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"afd89b9ec393451","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:19.506737Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"afd89b9ec393451","rtt":"0s","error":"dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:19.506812Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"afd89b9ec393451","rtt":"0s","error":"dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:23.029684Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.8:2380/version","remote-member-id":"afd89b9ec393451","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:23.029773Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"afd89b9ec393451","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:24.507751Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"afd89b9ec393451","rtt":"0s","error":"dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:24.507880Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"afd89b9ec393451","rtt":"0s","error":"dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:27.032320Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.8:2380/version","remote-member-id":"afd89b9ec393451","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:27.032433Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"afd89b9ec393451","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:29.508657Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"afd89b9ec393451","rtt":"0s","error":"dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:29.508678Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"afd89b9ec393451","rtt":"0s","error":"dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:31.034456Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.8:2380/version","remote-member-id":"afd89b9ec393451","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:31.034511Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"afd89b9ec393451","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:34.509077Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"afd89b9ec393451","rtt":"0s","error":"dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:34.509189Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"afd89b9ec393451","rtt":"0s","error":"dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:35.035693Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.8:2380/version","remote-member-id":"afd89b9ec393451","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-13T19:35:35.035890Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"afd89b9ec393451","error":"Get \"https://192.169.0.8:2380/version\": dial tcp 192.169.0.8:2380: connect: connection refused"}
	{"level":"info","ts":"2024-12-13T19:35:36.914446Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:35:36.914506Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:35:36.915577Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:35:36.968970Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"e397b3b47bd62ab9","to":"afd89b9ec393451","stream-type":"stream Message"}
	{"level":"info","ts":"2024-12-13T19:35:36.969147Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:35:36.970728Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"e397b3b47bd62ab9","to":"afd89b9ec393451","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-12-13T19:35:36.970799Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	
	
	==> etcd [ad0dc00c3676] <==
	2024/12/13 19:33:15 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-13T19:33:15.919286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"911.52519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-12-13T19:33:15.919296Z","caller":"traceutil/trace.go:171","msg":"trace[646065576] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"911.536819ms","start":"2024-12-13T19:33:15.007757Z","end":"2024-12-13T19:33:15.919293Z","steps":["trace[646065576] 'agreement among raft nodes before linearized reading'  (duration: 911.525741ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:33:15.919307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:33:15.007742Z","time spent":"911.561075ms","remote":"127.0.0.1:57240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2024/12/13 19:33:15 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-13T19:33:15.953693Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.6:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-13T19:33:15.953754Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.6:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-13T19:33:15.953797Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"e397b3b47bd62ab9","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-12-13T19:33:15.956144Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956196Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956235Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956328Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956354Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956412Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956443Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956450Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956457Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956468Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956907Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956957Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.957005Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.957016Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.960175Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.6:2380"}
	{"level":"info","ts":"2024-12-13T19:33:15.960341Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.6:2380"}
	{"level":"info","ts":"2024-12-13T19:33:15.960352Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-224000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.6:2380"],"advertise-client-urls":["https://192.169.0.6:2379"]}
	
	
	==> kernel <==
	 19:37:44 up 4 min,  0 users,  load average: 0.42, 0.38, 0.18
	Linux ha-224000 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [532326a9b719] <==
	I1213 19:32:38.955729       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:32:48.951745       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:32:48.951937       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:32:48.952237       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:32:48.952297       1 main.go:301] handling current node
	I1213 19:32:48.952312       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:32:48.952320       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:32:48.952519       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:32:48.952573       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:32:58.952815       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:32:58.952836       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:32:58.953197       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:32:58.953257       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:32:58.953413       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:32:58.953484       1 main.go:301] handling current node
	I1213 19:32:58.953506       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:32:58.953519       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:33:08.953874       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:33:08.953928       1 main.go:301] handling current node
	I1213 19:33:08.954191       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:33:08.954234       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:33:08.955460       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:33:08.955468       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:33:08.955667       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:33:08.955695       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f1b285481948] <==
	I1213 19:37:11.243993       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:37:21.243211       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:37:21.243714       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:37:21.244434       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:37:21.244642       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:37:21.244975       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:37:21.245123       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:37:21.245378       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:37:21.245522       1 main.go:301] handling current node
	I1213 19:37:31.243688       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:37:31.243758       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:37:31.243918       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:37:31.244043       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:37:31.244392       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:37:31.244432       1 main.go:301] handling current node
	I1213 19:37:31.244443       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:37:31.244449       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:37:41.249106       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:37:41.249448       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:37:41.249978       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:37:41.250111       1 main.go:301] handling current node
	I1213 19:37:41.250163       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:37:41.250282       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:37:41.250439       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:37:41.250519       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0457a6eb9fce] <==
	I1213 19:33:49.820720       1 options.go:228] external host was not specified, using 192.169.0.6
	I1213 19:33:49.826974       1 server.go:142] Version: v1.31.2
	I1213 19:33:49.828876       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:33:50.369348       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1213 19:33:50.373560       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1213 19:33:50.376229       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1213 19:33:50.376292       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1213 19:33:50.376453       1 instance.go:232] Using reconciler: lease
	W1213 19:34:10.367496       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1213 19:34:10.367678       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1213 19:34:10.377527       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [2c7e84811a05] <==
	I1213 19:34:33.858755       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1213 19:34:33.858846       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1213 19:34:33.932383       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1213 19:34:33.934311       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 19:34:33.944721       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 19:34:33.944939       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 19:34:33.945156       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1213 19:34:33.945214       1 policy_source.go:224] refreshing policies
	I1213 19:34:33.946446       1 shared_informer.go:320] Caches are synced for configmaps
	I1213 19:34:33.950262       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 19:34:33.950654       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 19:34:33.952135       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1213 19:34:33.958706       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1213 19:34:33.958952       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1213 19:34:33.959051       1 aggregator.go:171] initial CRD sync complete...
	I1213 19:34:33.959071       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 19:34:33.959175       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 19:34:33.959196       1 cache.go:39] Caches are synced for autoregister controller
	W1213 19:34:33.972653       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.7]
	I1213 19:34:33.974278       1 controller.go:615] quota admission added evaluator for: endpoints
	I1213 19:34:33.985761       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1213 19:34:33.990131       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1213 19:34:34.005835       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 19:34:34.842581       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1213 19:34:35.103753       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	
	
	==> kube-controller-manager [dba699a29858] <==
	I1213 19:35:18.255099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.195µs"
	I1213 19:35:18.273430       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="52.527µs"
	I1213 19:35:22.630807       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m03"
	I1213 19:35:27.515655       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m04"
	I1213 19:35:29.399814       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.322223ms"
	I1213 19:35:29.399864       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.332µs"
	I1213 19:35:32.722494       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m04"
	I1213 19:35:35.860699       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m03"
	I1213 19:35:35.873416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m03"
	I1213 19:35:36.745378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.094µs"
	I1213 19:35:37.488940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m03"
	I1213 19:35:39.297752       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.723659ms"
	I1213 19:35:39.297831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.52µs"
	I1213 19:35:43.044900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m04"
	I1213 19:35:43.142912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m04"
	I1213 19:35:55.552893       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9khgk\": the object has been modified; please apply your changes to the latest version and try again"
	I1213 19:35:55.553121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="14.725541ms"
	I1213 19:35:55.553280       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.548µs"
	I1213 19:35:55.553635       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"62fdbc68-3cb2-4c62-84a6-34ec3a6b8454", APIVersion:"v1", ResourceVersion:"255", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9khgk": the object has been modified; please apply your changes to the latest version and try again
	I1213 19:35:55.571600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.492248ms"
	I1213 19:35:55.576690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="52.23µs"
	I1213 19:35:55.577745       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9khgk\": the object has been modified; please apply your changes to the latest version and try again"
	I1213 19:35:55.578045       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"62fdbc68-3cb2-4c62-84a6-34ec3a6b8454", APIVersion:"v1", ResourceVersion:"255", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9khgk": the object has been modified; please apply your changes to the latest version and try again
	I1213 19:35:55.625981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="11.797733ms"
	I1213 19:35:55.626922       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.294µs"
	
	
	==> kube-controller-manager [dda62d21c5c2] <==
	I1213 19:33:49.641671       1 serving.go:386] Generated self-signed cert in-memory
	I1213 19:33:50.338076       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1213 19:33:50.338108       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:33:50.340327       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 19:33:50.340428       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 19:33:50.340697       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1213 19:33:50.340882       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 19:34:11.384884       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.6:8443/healthz\": dial tcp 192.169.0.6:8443: connect: connection refused"
	
	
	==> kube-proxy [5f565c400b73] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1213 19:35:27.545116       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1213 19:35:27.561280       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.6"]
	E1213 19:35:27.561547       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:35:27.593343       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1213 19:35:27.593524       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 19:35:27.593695       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:35:27.599613       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:35:27.600762       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:35:27.600792       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:35:27.603008       1 config.go:199] "Starting service config controller"
	I1213 19:35:27.603210       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:35:27.603407       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:35:27.603433       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:35:27.604612       1 config.go:328] "Starting node config controller"
	I1213 19:35:27.604643       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:35:27.704590       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1213 19:35:27.704694       1 shared_informer.go:320] Caches are synced for node config
	I1213 19:35:27.704710       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [94480a2dd9b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1213 19:26:14.203354       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1213 19:26:14.213097       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.6"]
	E1213 19:26:14.213174       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:26:14.241202       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1213 19:26:14.241246       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 19:26:14.241263       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:26:14.244275       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:26:14.244855       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:26:14.244882       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:26:14.246052       1 config.go:199] "Starting service config controller"
	I1213 19:26:14.246200       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:26:14.246348       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:26:14.246374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:26:14.246424       1 config.go:328] "Starting node config controller"
	I1213 19:26:14.246441       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:26:14.347309       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1213 19:26:14.347360       1 shared_informer.go:320] Caches are synced for service config
	I1213 19:26:14.347669       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [63c39e011231] <==
	E1213 19:28:30.473242       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jxwhq\": pod kube-proxy-jxwhq is already assigned to node \"ha-224000-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jxwhq" node="ha-224000-m03"
	E1213 19:28:30.474646       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d5770b31-991f-43c2-82a4-f0051e25f645(kube-system/kindnet-kpjh5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kpjh5"
	E1213 19:28:30.474870       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4b9ed970-5ad3-4b15-a714-24f0f06632c8(kube-system/kube-proxy-gmw9z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-gmw9z"
	E1213 19:28:30.475888       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kpjh5\": pod kindnet-kpjh5 is already assigned to node \"ha-224000-m03\"" pod="kube-system/kindnet-kpjh5"
	E1213 19:28:30.476671       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jxwhq\": pod kube-proxy-jxwhq is already assigned to node \"ha-224000-m03\"" pod="kube-system/kube-proxy-jxwhq"
	I1213 19:28:30.476729       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jxwhq" node="ha-224000-m03"
	I1213 19:28:30.475988       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kpjh5" node="ha-224000-m03"
	E1213 19:28:30.475897       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-gmw9z\": pod kube-proxy-gmw9z is already assigned to node \"ha-224000-m03\"" pod="kube-system/kube-proxy-gmw9z"
	I1213 19:28:30.478106       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-gmw9z" node="ha-224000-m03"
	E1213 19:28:59.957880       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod eaf3a368-16e9-43ba-ae1f-1ddc94ef233e(default/busybox-7dff88458-l97s5) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-l97s5"
	E1213 19:28:59.957902       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod eaf3a368-16e9-43ba-ae1f-1ddc94ef233e(default/busybox-7dff88458-l97s5) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-l97s5"
	I1213 19:28:59.957915       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-l97s5" node="ha-224000-m02"
	E1213 19:29:00.063963       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-zs25q is already present in the active queue" pod="default/busybox-7dff88458-zs25q"
	E1213 19:29:00.081842       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-zs25q\" not found" pod="default/busybox-7dff88458-zs25q"
	E1213 19:31:24.582665       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7b8ch\": pod kube-proxy-7b8ch is already assigned to node \"ha-224000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7b8ch" node="ha-224000-m04"
	E1213 19:31:24.582727       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7b8ch\": pod kube-proxy-7b8ch is already assigned to node \"ha-224000-m04\"" pod="kube-system/kube-proxy-7b8ch"
	E1213 19:31:24.582830       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8ccp4\": pod kube-proxy-8ccp4 is already assigned to node \"ha-224000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8ccp4" node="ha-224000-m04"
	E1213 19:31:24.582939       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8ccp4\": pod kube-proxy-8ccp4 is already assigned to node \"ha-224000-m04\"" pod="kube-system/kube-proxy-8ccp4"
	E1213 19:31:24.583359       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qqm9r\": pod kindnet-qqm9r is already assigned to node \"ha-224000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qqm9r" node="ha-224000-m04"
	E1213 19:31:24.583404       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qqm9r\": pod kindnet-qqm9r is already assigned to node \"ha-224000-m04\"" pod="kube-system/kindnet-qqm9r"
	I1213 19:31:24.586044       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7b8ch" node="ha-224000-m04"
	I1213 19:33:15.853518       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 19:33:15.859188       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 19:33:15.859357       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1213 19:33:15.864811       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8cce3a8cb126] <==
	W1213 19:34:33.926966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 19:34:33.927009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.927118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 19:34:33.927159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.927343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 19:34:33.927384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.927452       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 19:34:33.927490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.929589       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 19:34:33.929630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.929845       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 19:34:33.929886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.929952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 19:34:33.930027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 19:34:33.930195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 19:34:33.930473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 19:34:33.930610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 19:34:33.930722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930989       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 19:34:33.931026       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1213 19:34:55.098739       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 19:35:42 ha-224000 kubelet[1594]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 19:35:42 ha-224000 kubelet[1594]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 19:35:42 ha-224000 kubelet[1594]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 19:35:42 ha-224000 kubelet[1594]: I1213 19:35:42.186925    1594 scope.go:117] "RemoveContainer" containerID="901560cab05afd01ac1f97679993cf515730a563066592c72d364d4f023faa11"
	Dec 13 19:35:57 ha-224000 kubelet[1594]: I1213 19:35:57.639988    1594 scope.go:117] "RemoveContainer" containerID="6e865c58301353a95a17f9b7cc0efd9f449785d4fa6d23de4eae2d1f5ef7aa69"
	Dec 13 19:35:57 ha-224000 kubelet[1594]: I1213 19:35:57.640662    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:35:57 ha-224000 kubelet[1594]: E1213 19:35:57.640842    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:09 ha-224000 kubelet[1594]: I1213 19:36:09.158547    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:09 ha-224000 kubelet[1594]: E1213 19:36:09.158675    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:20 ha-224000 kubelet[1594]: I1213 19:36:20.159152    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:20 ha-224000 kubelet[1594]: E1213 19:36:20.159302    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:31 ha-224000 kubelet[1594]: I1213 19:36:31.158111    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:31 ha-224000 kubelet[1594]: E1213 19:36:31.158349    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:42 ha-224000 kubelet[1594]: I1213 19:36:42.158392    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:42 ha-224000 kubelet[1594]: E1213 19:36:42.198509    1594 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 13 19:36:42 ha-224000 kubelet[1594]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 13 19:36:42 ha-224000 kubelet[1594]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 19:36:42 ha-224000 kubelet[1594]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 19:36:42 ha-224000 kubelet[1594]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 19:36:42 ha-224000 kubelet[1594]: I1213 19:36:42.216134    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:37:42 ha-224000 kubelet[1594]: E1213 19:37:42.172559    1594 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 13 19:37:42 ha-224000 kubelet[1594]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 13 19:37:42 ha-224000 kubelet[1594]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 19:37:42 ha-224000 kubelet[1594]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 19:37:42 ha-224000 kubelet[1594]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
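
The etcd logs captured above show the surviving members repeatedly failing to reach peer afd89b9ec393451 at https://192.169.0.8:2380 (connection refused) until the peer becomes active again at 19:35:36, consistent with the m03 control plane rejoining after the restart. As a hedged diagnostic sketch only (these commands were not run by this test): peer health in a cluster like this is commonly checked with etcdctl from inside the etcd static pod. The pod name (etcd-ha-224000) and the certificate paths below are assumptions based on minikube's usual kubeadm layout, not values taken from these logs.

	# Sketch only: pod name and cert paths assume minikube's default kubeadm layout.
	kubectl --context ha-224000 -n kube-system exec etcd-ha-224000 -- sh -c \
	  'ETCDCTL_API=3 etcdctl \
	     --endpoints=https://127.0.0.1:2379 \
	     --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	     --cert=/var/lib/minikube/certs/etcd/server.crt \
	     --key=/var/lib/minikube/certs/etcd/server.key \
	     endpoint health --cluster'

A healthy three-member cluster prints one "is healthy" line per member; while m03 was down, the connection-refused pattern above would instead surface as an unhealthy 192.169.0.8:2379 endpoint.
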
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-224000 -n ha-224000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-224000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (289.40s)
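
Two follow-ups are suggested by the post-mortem above, given as a hedged sketch (standard kubectl usage, not commands run by this test): the kubelet log shows storage-provisioner stuck in CrashLoopBackOff, and the node events show ha-224000-m04 going NodeNotReady before re-registering.

	# Sketch only: assumes the ha-224000 kubeconfig context used throughout this run.
	kubectl --context ha-224000 get nodes -o wide
	kubectl --context ha-224000 -n kube-system logs storage-provisioner --previous
	kubectl --context ha-224000 describe node ha-224000-m04

Here `logs --previous` prints the crashed container's last run, which should indicate whether the provisioner was only failing against the apiserver outage visible at 19:34:10 or against something persistent.
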

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-darwin-amd64 -p ha-224000 node delete m03 -v=7 --alsologtostderr: (6.950292855s)
ha_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr: exit status 2 (382.13376ms)

                                                
                                                
-- stdout --
	ha-224000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-224000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-224000-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:37:52.753810    5491 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:37:52.754166    5491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:37:52.754172    5491 out.go:358] Setting ErrFile to fd 2...
	I1213 11:37:52.754176    5491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:37:52.754358    5491 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:37:52.754566    5491 out.go:352] Setting JSON to false
	I1213 11:37:52.754592    5491 mustload.go:65] Loading cluster: ha-224000
	I1213 11:37:52.754634    5491 notify.go:220] Checking for updates...
	I1213 11:37:52.754972    5491 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:37:52.754994    5491 status.go:174] checking status of ha-224000 ...
	I1213 11:37:52.755442    5491 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:37:52.755482    5491 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:37:52.767379    5491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51980
	I1213 11:37:52.767666    5491 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:37:52.768054    5491 main.go:141] libmachine: Using API Version  1
	I1213 11:37:52.768062    5491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:37:52.768309    5491 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:37:52.768418    5491 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:37:52.768505    5491 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:37:52.768587    5491 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:37:52.769778    5491 status.go:371] ha-224000 host status = "Running" (err=<nil>)
	I1213 11:37:52.769792    5491 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:37:52.770051    5491 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:37:52.770075    5491 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:37:52.784934    5491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51982
	I1213 11:37:52.785321    5491 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:37:52.785671    5491 main.go:141] libmachine: Using API Version  1
	I1213 11:37:52.785687    5491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:37:52.785893    5491 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:37:52.785995    5491 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:37:52.786090    5491 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:37:52.786352    5491 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:37:52.786374    5491 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:37:52.798050    5491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51984
	I1213 11:37:52.798391    5491 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:37:52.798719    5491 main.go:141] libmachine: Using API Version  1
	I1213 11:37:52.798736    5491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:37:52.798959    5491 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:37:52.799073    5491 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:37:52.799247    5491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:37:52.799266    5491 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:37:52.799919    5491 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:37:52.800115    5491 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:37:52.800295    5491 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:37:52.800641    5491 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:37:52.832978    5491 ssh_runner.go:195] Run: systemctl --version
	I1213 11:37:52.837441    5491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:37:52.849628    5491 kubeconfig.go:125] found "ha-224000" server: "https://192.169.0.254:8443"
	I1213 11:37:52.849653    5491 api_server.go:166] Checking apiserver status ...
	I1213 11:37:52.849706    5491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:52.862005    5491 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2339/cgroup
	W1213 11:37:52.870700    5491 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2339/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:37:52.870760    5491 ssh_runner.go:195] Run: ls
	I1213 11:37:52.873935    5491 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1213 11:37:52.877068    5491 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1213 11:37:52.877078    5491 status.go:463] ha-224000 apiserver status = Running (err=<nil>)
	I1213 11:37:52.877085    5491 status.go:176] ha-224000 status: &{Name:ha-224000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:37:52.877097    5491 status.go:174] checking status of ha-224000-m02 ...
	I1213 11:37:52.877391    5491 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:37:52.877413    5491 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:37:52.889072    5491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51988
	I1213 11:37:52.889376    5491 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:37:52.889713    5491 main.go:141] libmachine: Using API Version  1
	I1213 11:37:52.889728    5491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:37:52.889952    5491 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:37:52.890079    5491 main.go:141] libmachine: (ha-224000-m02) Calling .GetState
	I1213 11:37:52.890178    5491 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:37:52.890269    5491 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid from json: 5263
	I1213 11:37:52.891519    5491 status.go:371] ha-224000-m02 host status = "Running" (err=<nil>)
	I1213 11:37:52.891529    5491 host.go:66] Checking if "ha-224000-m02" exists ...
	I1213 11:37:52.891785    5491 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:37:52.891807    5491 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:37:52.903391    5491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51990
	I1213 11:37:52.903717    5491 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:37:52.904051    5491 main.go:141] libmachine: Using API Version  1
	I1213 11:37:52.904064    5491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:37:52.904277    5491 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:37:52.904372    5491 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:37:52.904483    5491 host.go:66] Checking if "ha-224000-m02" exists ...
	I1213 11:37:52.904751    5491 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:37:52.904781    5491 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:37:52.916289    5491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51992
	I1213 11:37:52.916601    5491 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:37:52.916953    5491 main.go:141] libmachine: Using API Version  1
	I1213 11:37:52.916967    5491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:37:52.917180    5491 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:37:52.917296    5491 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:37:52.917442    5491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:37:52.917453    5491 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:37:52.917533    5491 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:37:52.917612    5491 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:37:52.917711    5491 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:37:52.917794    5491 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:37:52.951277    5491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:37:52.962097    5491 kubeconfig.go:125] found "ha-224000" server: "https://192.169.0.254:8443"
	I1213 11:37:52.962112    5491 api_server.go:166] Checking apiserver status ...
	I1213 11:37:52.962164    5491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:52.973179    5491 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2104/cgroup
	W1213 11:37:52.980675    5491 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:37:52.980741    5491 ssh_runner.go:195] Run: ls
	I1213 11:37:52.983991    5491 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1213 11:37:52.987197    5491 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1213 11:37:52.987209    5491 status.go:463] ha-224000-m02 apiserver status = Running (err=<nil>)
	I1213 11:37:52.987213    5491 status.go:176] ha-224000-m02 status: &{Name:ha-224000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:37:52.987236    5491 status.go:174] checking status of ha-224000-m04 ...
	I1213 11:37:52.987523    5491 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:37:52.987543    5491 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:37:52.998997    5491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51996
	I1213 11:37:52.999305    5491 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:37:52.999651    5491 main.go:141] libmachine: Using API Version  1
	I1213 11:37:52.999667    5491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:37:52.999925    5491 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:37:53.000035    5491 main.go:141] libmachine: (ha-224000-m04) Calling .GetState
	I1213 11:37:53.000141    5491 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:37:53.000227    5491 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid from json: 5375
	I1213 11:37:53.001452    5491 status.go:371] ha-224000-m04 host status = "Running" (err=<nil>)
	I1213 11:37:53.001460    5491 host.go:66] Checking if "ha-224000-m04" exists ...
	I1213 11:37:53.001721    5491 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:37:53.001751    5491 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:37:53.013358    5491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51998
	I1213 11:37:53.013682    5491 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:37:53.014010    5491 main.go:141] libmachine: Using API Version  1
	I1213 11:37:53.014021    5491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:37:53.014253    5491 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:37:53.014392    5491 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:37:53.014499    5491 host.go:66] Checking if "ha-224000-m04" exists ...
	I1213 11:37:53.014775    5491 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:37:53.014810    5491 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:37:53.026167    5491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52000
	I1213 11:37:53.026476    5491 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:37:53.026846    5491 main.go:141] libmachine: Using API Version  1
	I1213 11:37:53.026864    5491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:37:53.027077    5491 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:37:53.027185    5491 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:37:53.027356    5491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:37:53.027367    5491 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:37:53.027449    5491 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:37:53.027532    5491 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:37:53.027680    5491 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:37:53.027754    5491 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:37:53.055527    5491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:37:53.066917    5491 status.go:176] ha-224000-m04 status: &{Name:ha-224000-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr" : exit status 2
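
The status probe that failed above runs in three layers, all visible in the log: the driver plugin reports host state (status.go:371), `systemctl is-active --quiet service kubelet` checks the kubelet over SSH, and control-plane nodes get an HTTPS GET against the shared apiserver endpoint (api_server.go:253). The exit status 2 appears to come from ha-224000-m04 reporting Kubelet:Stopped; both apiserver probes returned 200. A minimal sketch of the healthz probe, using only the endpoint from the log and the Go standard library (illustrative, not minikube's actual code):

// healthz_probe.go - hedged sketch of the probe logged at api_server.go:253/279.
// InsecureSkipVerify is used only because this sketch has no access to the
// profile's CA; the real check trusts minikube's own certificates.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log treats a 200 response with body "ok" as Running.
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("https://192.169.0.254:8443")
	fmt.Println(healthy, err)
}
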
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-224000 -n ha-224000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-224000 logs -n 25: (3.336216701s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m02 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m03_ha-224000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m03:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04:/home/docker/cp-test_ha-224000-m03_ha-224000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m04 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m03_ha-224000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-224000 cp testdata/cp-test.txt                                                                                            | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1762227409/001/cp-test_ha-224000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000:/home/docker/cp-test_ha-224000-m04_ha-224000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000 sudo cat                                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m04_ha-224000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m02:/home/docker/cp-test_ha-224000-m04_ha-224000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m02 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m04_ha-224000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m03:/home/docker/cp-test_ha-224000-m04_ha-224000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m03 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m04_ha-224000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-224000 node stop m02 -v=7                                                                                                 | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-224000 node start m02 -v=7                                                                                                | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-224000 -v=7                                                                                                       | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-224000 -v=7                                                                                                            | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:33 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-224000 --wait=true -v=7                                                                                                | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:33 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-224000                                                                                                            | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:37 PST |                     |
	| node    | ha-224000 node delete m03 -v=7                                                                                               | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:37 PST | 13 Dec 24 11:37 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 11:33:23
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
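
// Sketch (not part of the captured log): the header above documents the
// glog-style format [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg,
// where the 4th field is the thread id (the process id in these logs).
// The regexp and field names below are illustrative, not from minikube.
package main

import (
	"fmt"
	"regexp"
)

var glogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.-]+):(\d+)\] (.*)$`)

func main() {
	line := `I1213 11:33:23.556546    5233 out.go:345] Setting OutFile to fd 1 ...`
	if m := glogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s mmdd=%s time=%s pid=%s source=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}
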
	I1213 11:33:23.556546    5233 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:33:23.556761    5233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:33:23.556766    5233 out.go:358] Setting ErrFile to fd 2...
	I1213 11:33:23.556770    5233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:33:23.556939    5233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:33:23.558493    5233 out.go:352] Setting JSON to false
	I1213 11:33:23.588845    5233 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1973,"bootTime":1734116430,"procs":551,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 11:33:23.588936    5233 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 11:33:23.610818    5233 out.go:177] * [ha-224000] minikube v1.34.0 on Darwin 15.1.1
	I1213 11:33:23.652607    5233 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 11:33:23.652667    5233 notify.go:220] Checking for updates...
	I1213 11:33:23.695155    5233 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:23.716580    5233 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 11:33:23.758076    5233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:33:23.778447    5233 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 11:33:23.799542    5233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:33:23.821105    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:23.821299    5233 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 11:33:23.821877    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:23.821927    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:23.834367    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51814
	I1213 11:33:23.834740    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:23.835143    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:23.835152    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:23.835371    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:23.835545    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:23.867473    5233 out.go:177] * Using the hyperkit driver based on existing profile
	I1213 11:33:23.909252    5233 start.go:297] selected driver: hyperkit
	I1213 11:33:23.909282    5233 start.go:901] validating driver "hyperkit" against &{Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:23.909534    5233 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:33:23.909725    5233 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:33:23.909981    5233 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20090-800/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1213 11:33:23.922579    5233 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1213 11:33:23.929434    5233 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:23.929452    5233 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1213 11:33:23.935885    5233 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:33:23.935924    5233 cni.go:84] Creating CNI manager for ""
	I1213 11:33:23.935972    5233 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1213 11:33:23.936044    5233 start.go:340] cluster config:
	{Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:23.936181    5233 iso.go:125] acquiring lock: {Name:mke3ec926417a11c6d5b1356d2702df4068fa1cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:33:23.978382    5233 out.go:177] * Starting "ha-224000" primary control-plane node in "ha-224000" cluster
	I1213 11:33:23.999338    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:23.999406    5233 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1213 11:33:23.999429    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:33:23.999602    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:33:23.999621    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:33:23.999813    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:24.000837    5233 start.go:360] acquireMachinesLock for ha-224000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:33:24.000950    5233 start.go:364] duration metric: took 87.843µs to acquireMachinesLock for "ha-224000"
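
// Sketch (not part of the captured log): the acquireMachinesLock entries
// above describe a named lock acquired with {Delay:500ms Timeout:13m0s},
// i.e. poll until the timeout. Using O_CREATE|O_EXCL on a lock file as the
// mutual-exclusion primitive is an assumption of this sketch, not
// necessarily minikube's mechanism.
package main

import (
	"fmt"
	"os"
	"time"
)

func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release by deleting the file
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay) // retry at the configured Delay
	}
}

func main() {
	release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err == nil {
		defer release()
	}
	fmt.Println(err)
}
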
	I1213 11:33:24.000984    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:33:24.001006    5233 fix.go:54] fixHost starting: 
	I1213 11:33:24.001462    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:24.001491    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:24.013395    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51816
	I1213 11:33:24.013731    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:24.014113    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:24.014132    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:24.014335    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:24.014453    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:24.014563    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:33:24.014649    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.014739    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 4112
	I1213 11:33:24.015879    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid 4112 missing from process table
	I1213 11:33:24.015946    5233 fix.go:112] recreateIfNeeded on ha-224000: state=Stopped err=<nil>
	I1213 11:33:24.015971    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	W1213 11:33:24.016061    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:33:24.037410    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000" ...
	I1213 11:33:24.058353    5233 main.go:141] libmachine: (ha-224000) Calling .Start
	I1213 11:33:24.058516    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.058530    5233 main.go:141] libmachine: (ha-224000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid
	I1213 11:33:24.059997    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid 4112 missing from process table
	I1213 11:33:24.060006    5233 main.go:141] libmachine: (ha-224000) DBG | pid 4112 is in state "Stopped"
	I1213 11:33:24.060020    5233 main.go:141] libmachine: (ha-224000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid...
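
// Sketch (not part of the captured log): the DBG lines above show the
// stale-pid cleanup after an unclean shutdown - the pid recorded in
// hyperkit.pid is gone from the process table, so the file is removed
// before restart. Equivalent logic, with illustrative paths and helpers:
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive sends signal 0, which performs error checking only: an error
// means the pid is not in the process table (or not ours to signal).
func pidAlive(pid int) bool {
	return syscall.Kill(pid, syscall.Signal(0)) == nil
}

func removeStalePidFile(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err // no pid file at all: nothing to clean up
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return fmt.Errorf("malformed pid file %s: %w", path, err)
	}
	if pidAlive(pid) {
		return fmt.Errorf("pid %d still running; not removing %s", pid, path)
	}
	return os.Remove(path) // pid missing from process table: file is stale
}

func main() {
	fmt.Println(removeStalePidFile("/tmp/hyperkit.pid"))
}
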
	I1213 11:33:24.060148    5233 main.go:141] libmachine: (ha-224000) DBG | Using UUID b2cf51fb-709d-45fe-a947-282a845e5503
	I1213 11:33:24.195839    5233 main.go:141] libmachine: (ha-224000) DBG | Generated MAC e2:1f:26:f2:db:4d
	I1213 11:33:24.195876    5233 main.go:141] libmachine: (ha-224000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:33:24.196013    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b2cf51fb-709d-45fe-a947-282a845e5503", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:24.196037    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b2cf51fb-709d-45fe-a947-282a845e5503", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:24.196083    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b2cf51fb-709d-45fe-a947-282a845e5503", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/ha-224000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:33:24.196130    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b2cf51fb-709d-45fe-a947-282a845e5503 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/ha-224000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:33:24.196149    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:33:24.198377    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Pid is 5248
	I1213 11:33:24.198751    5233 main.go:141] libmachine: (ha-224000) DBG | Attempt 0
	I1213 11:33:24.198766    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.198839    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:33:24.200071    5233 main.go:141] libmachine: (ha-224000) DBG | Searching for e2:1f:26:f2:db:4d in /var/db/dhcpd_leases ...
	I1213 11:33:24.200197    5233 main.go:141] libmachine: (ha-224000) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:33:24.200237    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:33:24.200259    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c99d7}
	I1213 11:33:24.200275    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c98c5}
	I1213 11:33:24.200287    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9849}
	I1213 11:33:24.200302    5233 main.go:141] libmachine: (ha-224000) DBG | Found match: e2:1f:26:f2:db:4d
	I1213 11:33:24.200309    5233 main.go:141] libmachine: (ha-224000) DBG | IP: 192.169.0.6
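
// Sketch (not part of the captured log): the lease search above maps the
// VM's generated MAC to an IP by scanning /var/db/dhcpd_leases, macOS
// bootpd's lease database. The key=value block layout assumed here is
// illustrative; note the ID fields in the log show octets with leading
// zeros stripped (e2:d2:9:... vs e2:d2:09:...), so a robust matcher would
// normalize both sides before comparing.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func lookupLeaseIP(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			hw = strings.TrimPrefix(line, "hw_address=1,") // "1," = ethernet
		case line == "}": // end of one lease entry
			if strings.EqualFold(hw, mac) {
				return ip, nil
			}
			ip, hw = "", ""
		}
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	fmt.Println(lookupLeaseIP("/var/db/dhcpd_leases", "e2:1f:26:f2:db:4d"))
}
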
	I1213 11:33:24.200346    5233 main.go:141] libmachine: (ha-224000) Calling .GetConfigRaw
	I1213 11:33:24.201046    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:24.201273    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:24.201998    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:33:24.202010    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:24.202152    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:24.202253    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:24.202345    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:24.202460    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:24.202575    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:24.202734    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:24.202918    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:24.202926    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:33:24.209830    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:33:24.275074    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:33:24.275977    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:24.275998    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:24.276018    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:24.276028    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:24.664445    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:33:24.664462    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:33:24.779029    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:24.779050    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:24.779061    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:24.779087    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:24.779925    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:33:24.779935    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:33:30.509300    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:33:30.509378    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:33:30.509389    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:33:30.535654    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:33:35.263286    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:33:35.263305    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.263484    5233 buildroot.go:166] provisioning hostname "ha-224000"
	I1213 11:33:35.263495    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.263594    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.263690    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.263795    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.263879    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.263974    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.264111    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.264249    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.264257    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000 && echo "ha-224000" | sudo tee /etc/hostname
	I1213 11:33:35.330220    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000
	
	I1213 11:33:35.330242    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.330385    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.330487    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.330579    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.330683    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.330825    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.330962    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.330973    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:33:35.395347    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:33:35.395367    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:33:35.395380    5233 buildroot.go:174] setting up certificates
	I1213 11:33:35.395390    5233 provision.go:84] configureAuth start
	I1213 11:33:35.395396    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.395536    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:35.395626    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.395729    5233 provision.go:143] copyHostCerts
	I1213 11:33:35.395759    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:33:35.395813    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:33:35.395824    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:33:35.395941    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:33:35.396166    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:33:35.396198    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:33:35.396203    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:33:35.396305    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:33:35.396479    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:33:35.396511    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:33:35.396516    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:33:35.396585    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:33:35.396750    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000 san=[127.0.0.1 192.169.0.6 ha-224000 localhost minikube]
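
// Sketch (not part of the captured log): provision.go:117 above generates a
// server certificate whose SANs cover 127.0.0.1, 192.169.0.6, ha-224000,
// localhost and minikube. A crypto/x509 equivalent; self-signing here is
// only for brevity, where the real flow signs with the minikube CA key
// pair named in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-224000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		DNSNames:     []string{"ha-224000", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
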
	I1213 11:33:35.608012    5233 provision.go:177] copyRemoteCerts
	I1213 11:33:35.608088    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:33:35.608110    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.608273    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.608376    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.608484    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.608616    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:35.643782    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:33:35.643849    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:33:35.663504    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:33:35.663563    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1213 11:33:35.683076    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:33:35.683137    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:33:35.702561    5233 provision.go:87] duration metric: took 307.16247ms to configureAuth
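The provision step above (provision.go:117) signs a server certificate against the cluster CA with org=jenkins.ha-224000 and the SAN list [127.0.0.1 192.169.0.6 ha-224000 localhost minikube]. A minimal sketch of that kind of CA-signed server cert using Go's crypto/x509 follows; it is not minikube's code, the throwaway CA here stands in for the pre-existing ca.pem/ca-key.pem, and error handling is elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical stand-in for the existing minikube CA (ca.pem / ca-key.pem).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert carrying the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-224000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-224000", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }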
	I1213 11:33:35.702573    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:33:35.702742    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:35.702756    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:35.702886    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.702984    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.703073    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.703154    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.703252    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.703383    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.703507    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.703514    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:33:35.761527    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:33:35.761539    5233 buildroot.go:70] root file system type: tmpfs
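buildroot.go:70 derives the guest's root filesystem type from the `df --output=fstype / | tail -n 1` output just above. A Linux-only sketch that obtains the same answer without shelling out, via syscall.Statfs and a partial, assumed magic-number table:

    package main

    import (
    	"fmt"
    	"syscall"
    )

    // Subset of Linux filesystem magic numbers; tmpfs is 0x01021994.
    var fsMagic = map[int64]string{
    	0x01021994: "tmpfs",
    	0xef53:     "ext4",
    	0x9123683e: "btrfs",
    }

    func main() {
    	var st syscall.Statfs_t
    	if err := syscall.Statfs("/", &st); err != nil {
    		panic(err)
    	}
    	name, ok := fsMagic[int64(st.Type)]
    	if !ok {
    		name = fmt.Sprintf("unknown (0x%x)", st.Type)
    	}
    	fmt.Println(name) // prints "tmpfs" on the buildroot guest above
    }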
	I1213 11:33:35.761614    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:33:35.761631    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.761761    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.761867    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.761952    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.762029    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.762180    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.762322    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.762369    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:33:35.829448    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:33:35.829473    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.829611    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.829710    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.829804    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.829882    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.830037    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.830180    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.830192    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:33:37.506714    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:33:37.506731    5233 machine.go:96] duration metric: took 13.304830015s to provisionDockerMachine
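The `sudo diff -u ... || { mv ...; systemctl ...; }` one-liner above is an idempotent compare-and-swap: the new unit is moved into place and docker reloaded, enabled, and restarted only when docker.service.new differs from the installed unit (here diff failed because the target did not exist yet, which also counts as "changed"). A sketch of the same pattern in Go, using the paths from the log but otherwise hypothetical, and requiring root:

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    func main() {
    	const unit = "/lib/systemd/system/docker.service"
    	newBody, err := os.ReadFile(unit + ".new")
    	if err != nil {
    		panic(err)
    	}
    	old, err := os.ReadFile(unit) // a missing unit counts as "changed"
    	if err == nil && bytes.Equal(old, newBody) {
    		return // unit unchanged: leave the running docker alone
    	}
    	if err := os.Rename(unit+".new", unit); err != nil {
    		panic(err)
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"enable", "docker"},
    		{"restart", "docker"},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			panic(string(out))
    		}
    	}
    }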
	I1213 11:33:37.506744    5233 start.go:293] postStartSetup for "ha-224000" (driver="hyperkit")
	I1213 11:33:37.506752    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:33:37.506763    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.506964    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:33:37.506981    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.507084    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.507184    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.507273    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.507359    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.549053    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:33:37.553822    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:33:37.553837    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:33:37.553928    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:33:37.554104    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:33:37.554111    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:33:37.554283    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:33:37.567654    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:33:37.594179    5233 start.go:296] duration metric: took 87.426295ms for postStartSetup
	I1213 11:33:37.594207    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.594408    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:33:37.594421    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.594508    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.594590    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.594724    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.594816    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.628799    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:33:37.628871    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:33:37.659933    5233 fix.go:56] duration metric: took 13.659041433s for fixHost
	I1213 11:33:37.659954    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.660095    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.660190    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.660283    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.660359    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.660499    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:37.660647    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:37.660654    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:33:37.718237    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118417.855687365
	
	I1213 11:33:37.718250    5233 fix.go:216] guest clock: 1734118417.855687365
	I1213 11:33:37.718256    5233 fix.go:229] Guest: 2024-12-13 11:33:37.855687365 -0800 PST Remote: 2024-12-13 11:33:37.659944 -0800 PST m=+14.144143612 (delta=195.743365ms)
	I1213 11:33:37.718279    5233 fix.go:200] guest clock delta is within tolerance: 195.743365ms
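fix.go reads `date +%s.%N` from the guest and compares it against the host clock, accepting the 195.743365ms delta above as within tolerance. A sketch of that comparison; the 2-second threshold is an assumption for illustration, not minikube's actual tolerance:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` captured from the guest, as in the log above.
    	guestRaw := "1734118417.855687365"
    	parts := strings.SplitN(guestRaw, ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(sec, nsec)

    	delta := guest.Sub(time.Now())
    	fmt.Printf("guest clock delta: %v\n", delta)
    	if math.Abs(delta.Seconds()) > 2 { // hypothetical tolerance
    		fmt.Println("skew too large: would resync the guest clock")
    	}
    }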
	I1213 11:33:37.718284    5233 start.go:83] releasing machines lock for "ha-224000", held for 13.717432141s
	I1213 11:33:37.718302    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718458    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:37.718557    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718855    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718959    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.719072    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:33:37.719100    5233 ssh_runner.go:195] Run: cat /version.json
	I1213 11:33:37.719104    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.719118    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.719221    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.719232    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.719345    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.719360    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.719454    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.719480    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.719588    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.719609    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.801992    5233 ssh_runner.go:195] Run: systemctl --version
	I1213 11:33:37.807211    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:33:37.811454    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:33:37.811510    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:33:37.823724    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:33:37.823735    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:33:37.823838    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:33:37.842317    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:33:37.851247    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:33:37.859919    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:33:37.859977    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:33:37.868699    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:33:37.877385    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:33:37.885895    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:33:37.894631    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:33:37.903433    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:33:37.912080    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:33:37.920838    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:33:37.929686    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:33:37.937526    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:33:37.937575    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:33:37.946343    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
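The three commands above form a probe-then-load fallback: `sysctl net.bridge.bridge-nf-call-iptables` fails with "cannot stat" until the br_netfilter module is loaded, so the module is modprobe'd and ip_forward enabled afterwards. A root-only, Linux-only sketch of the same ordering:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(key); err != nil {
    		// Mirrors the log: the sysctl entry is absent until br_netfilter loads.
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			panic(string(out))
    		}
    	}
    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Println("bridge netfilter and ip_forward configured")
    }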
	I1213 11:33:37.954321    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:38.055814    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:33:38.074538    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:33:38.074638    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:33:38.087031    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:33:38.101085    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:33:38.116013    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:33:38.126951    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:33:38.137488    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:33:38.158482    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:33:38.168678    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:33:38.183844    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:33:38.186730    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:33:38.193926    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:33:38.207186    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:33:38.306381    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:33:38.409182    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:33:38.409284    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:33:38.423485    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:38.520298    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:33:40.856468    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336161165s)
	I1213 11:33:40.856560    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:33:40.867785    5233 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 11:33:40.881291    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:33:40.891767    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:33:40.985833    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:33:41.094364    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.203166    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:33:41.217499    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:33:41.228676    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.322265    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:33:41.392321    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:33:41.392423    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:33:41.396866    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:33:41.396929    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:33:41.400110    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:33:41.428478    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:33:41.428562    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:33:41.446343    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:33:41.486067    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:33:41.486118    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:41.486570    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:33:41.490428    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:33:41.500921    5233 kubeadm.go:883] updating cluster {Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
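The long blob above is a single log line: a Go struct printed with the %+v verb, which emits field names in the {Name:... IP:...} shape seen here. A trimmed, hypothetical stand-in for minikube's much larger cluster config, showing how such a dump is produced:

    package main

    import "fmt"

    // Node and ClusterConfig are illustrative only, not minikube's types.
    type Node struct {
    	Name              string
    	IP                string
    	Port              int
    	KubernetesVersion string
    	ControlPlane      bool
    	Worker            bool
    }

    type ClusterConfig struct {
    	Name   string
    	Driver string
    	Memory int
    	Nodes  []Node
    }

    func main() {
    	cc := ClusterConfig{
    		Name:   "ha-224000",
    		Driver: "hyperkit",
    		Memory: 2200,
    		Nodes: []Node{
    			{IP: "192.169.0.6", Port: 8443, KubernetesVersion: "v1.31.2", ControlPlane: true, Worker: true},
    			{Name: "m02", IP: "192.169.0.7", Port: 8443, KubernetesVersion: "v1.31.2", ControlPlane: true, Worker: true},
    		},
    	}
    	// %+v prints field names, producing the {Name:... IP:...} form in the log.
    	fmt.Printf("updating cluster %+v ...\n", cc)
    }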
	I1213 11:33:41.501009    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:41.501080    5233 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 11:33:41.514302    5233 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.7
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1213 11:33:41.514313    5233 docker.go:619] Images already preloaded, skipping extraction
	I1213 11:33:41.514404    5233 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 11:33:41.528088    5233 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.7
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1213 11:33:41.528111    5233 cache_images.go:84] Images are preloaded, skipping loading
	I1213 11:33:41.528123    5233 kubeadm.go:934] updating node { 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1213 11:33:41.528195    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:33:41.528276    5233 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 11:33:41.563286    5233 cni.go:84] Creating CNI manager for ""
	I1213 11:33:41.563301    5233 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1213 11:33:41.563314    5233 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 11:33:41.563331    5233 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.6 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-224000 NodeName:ha-224000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:33:41.563411    5233 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-224000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.6"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.6"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
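	The generated kubeadm config above is one stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A sketch of consuming such a stream with gopkg.in/yaml.v3 (an assumed dependency, not necessarily what minikube uses), whose decoder iterates documents until io.EOF:

    package main

    import (
    	"fmt"
    	"io"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    const manifest = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    `

    func main() {
    	dec := yaml.NewDecoder(strings.NewReader(manifest))
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break // end of the multi-document stream
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }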
	
	I1213 11:33:41.563429    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:33:41.563502    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:33:41.577356    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:33:41.577431    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
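	kube-vip.go:167 enables control-plane load-balancing (the lb_enable/lb_port entries in the manifest above) only after the `sudo modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack` probe succeeds. A Linux-only sketch of gating that flag on whether ip_vs appears in /proc/modules:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // moduleLoaded reports whether a kernel module appears in /proc/modules.
    func moduleLoaded(name string) bool {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if strings.HasPrefix(sc.Text(), name+" ") {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// Drives the lb_enable env var in a manifest like the one above.
    	fmt.Printf("lb_enable=%v\n", moduleLoaded("ip_vs"))
    }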
	I1213 11:33:41.577503    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:33:41.586076    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:33:41.586130    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1213 11:33:41.593693    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1213 11:33:41.607111    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:33:41.620717    5233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1213 11:33:41.634595    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:33:41.648138    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:33:41.651088    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:33:41.660611    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.764209    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:33:41.776920    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.6
	I1213 11:33:41.776935    5233 certs.go:194] generating shared ca certs ...
	I1213 11:33:41.776947    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.777111    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:33:41.777172    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:33:41.777182    5233 certs.go:256] generating profile certs ...
	I1213 11:33:41.777268    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:33:41.777289    5233 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848
	I1213 11:33:41.777307    5233 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.6 192.169.0.7 192.169.0.8 192.169.0.254]
	I1213 11:33:41.924008    5233 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 ...
	I1213 11:33:41.924024    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848: {Name:mk14c8bdd605a32a15c7e818d08d02d64b9be917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.925000    5233 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848 ...
	I1213 11:33:41.925011    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848: {Name:mk0673ccf9e28132db2b00d320fea4d73482d286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.925290    5233 certs.go:381] copying /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 -> /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt
	I1213 11:33:41.925479    5233 certs.go:385] copying /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848 -> /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key
	I1213 11:33:41.925688    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:33:41.925697    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:33:41.925721    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:33:41.925741    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:33:41.925761    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:33:41.925780    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:33:41.925802    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:33:41.925823    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:33:41.925841    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:33:41.925928    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:33:41.925965    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:33:41.925979    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:33:41.926013    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:33:41.926042    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:33:41.926077    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:33:41.926146    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:33:41.926184    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:33:41.926207    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:41.926225    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:33:41.927710    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:33:41.951166    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:33:41.975929    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:33:42.015520    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:33:42.051250    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:33:42.097395    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:33:42.139215    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:33:42.167922    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:33:42.188284    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:33:42.207671    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:33:42.226762    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:33:42.245781    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:33:42.259332    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:33:42.263629    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:33:42.272753    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.276074    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.276126    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.280400    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:33:42.289318    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:33:42.298635    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.301936    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.301986    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.306272    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 11:33:42.315219    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:33:42.324178    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.327536    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.327583    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.331821    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
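The link names created above (/etc/ssl/certs/3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, produced by the `openssl x509 -hash -noout` runs interleaved with the `ln -fs` commands. A root-only sketch that shells out for the hash and recreates one such symlink, with the cert path taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // mirror ln -fs: replace any stale link
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }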
	I1213 11:33:42.340849    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:33:42.344177    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:33:42.348774    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:33:42.353021    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:33:42.357742    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:33:42.361999    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:33:42.366226    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
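Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is what tells the caller whether a cert needs regenerating. The equivalent check in Go's crypto/x509, using one of the paths from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Same test as `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h: would regenerate")
    	} else {
    		fmt.Println("certificate valid for at least 24h")
    	}
    }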
	I1213 11:33:42.370715    5233 kubeadm.go:392] StartCluster: {Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:42.370839    5233 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 11:33:42.382402    5233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:33:42.390619    5233 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1213 11:33:42.390630    5233 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1213 11:33:42.390688    5233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:33:42.399169    5233 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:33:42.399486    5233 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-224000" does not appear in /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.399573    5233 kubeconfig.go:62] /Users/jenkins/minikube-integration/20090-800/kubeconfig needs updating (will repair): [kubeconfig missing "ha-224000" cluster setting kubeconfig missing "ha-224000" context setting]
	I1213 11:33:42.399754    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/kubeconfig: {Name:mk8eff3a3a3e37d84455f265c7172359004b7be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.400139    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.400368    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:33:42.400704    5233 cert_rotation.go:140] Starting client certificate rotation controller
	I1213 11:33:42.400887    5233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:33:42.408731    5233 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.6
	I1213 11:33:42.408748    5233 kubeadm.go:597] duration metric: took 18.113581ms to restartPrimaryControlPlane
	I1213 11:33:42.408754    5233 kubeadm.go:394] duration metric: took 38.045507ms to StartCluster
	I1213 11:33:42.408764    5233 settings.go:142] acquiring lock: {Name:mk0626482d1a77203bd9c1b6d841b6780f4771c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.408852    5233 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.409247    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/kubeconfig: {Name:mk8eff3a3a3e37d84455f265c7172359004b7be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.409470    5233 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:33:42.409483    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:33:42.409500    5233 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:33:42.409614    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:42.452999    5233 out.go:177] * Enabled addons: 
	I1213 11:33:42.473889    5233 addons.go:510] duration metric: took 64.391249ms for enable addons: enabled=[]
	I1213 11:33:42.473995    5233 start.go:246] waiting for cluster config update ...
	I1213 11:33:42.474008    5233 start.go:255] writing updated cluster config ...
	I1213 11:33:42.496132    5233 out.go:201] 
	I1213 11:33:42.517570    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:42.517711    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.541038    5233 out.go:177] * Starting "ha-224000-m02" control-plane node in "ha-224000" cluster
	I1213 11:33:42.583131    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:42.583188    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:33:42.583372    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:33:42.583392    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:33:42.583516    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.584724    5233 start.go:360] acquireMachinesLock for ha-224000-m02: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:33:42.584832    5233 start.go:364] duration metric: took 83.288µs to acquireMachinesLock for "ha-224000-m02"
	I1213 11:33:42.584859    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:33:42.584868    5233 fix.go:54] fixHost starting: m02
	I1213 11:33:42.585263    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:42.585289    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:42.597490    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51838
	I1213 11:33:42.598009    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:42.598520    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:42.598537    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:42.598854    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:42.598984    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:33:42.599156    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetState
	I1213 11:33:42.599250    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.599342    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid from json: 5143
	I1213 11:33:42.600521    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid 5143 missing from process table
	I1213 11:33:42.600553    5233 fix.go:112] recreateIfNeeded on ha-224000-m02: state=Stopped err=<nil>
	I1213 11:33:42.600561    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	W1213 11:33:42.600657    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:33:42.642952    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m02" ...
	I1213 11:33:42.664177    5233 main.go:141] libmachine: (ha-224000-m02) Calling .Start
	I1213 11:33:42.664494    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.664558    5233 main.go:141] libmachine: (ha-224000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid
	I1213 11:33:42.666694    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid 5143 missing from process table
	I1213 11:33:42.666707    5233 main.go:141] libmachine: (ha-224000-m02) DBG | pid 5143 is in state "Stopped"
	I1213 11:33:42.666723    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid...
	I1213 11:33:42.667115    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Using UUID 573e64b1-a821-4bce-aba3-b379863bb495
	I1213 11:33:42.694947    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Generated MAC fa:54:eb:53:13:e6
	I1213 11:33:42.695001    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:33:42.695241    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"573e64b1-a821-4bce-aba3-b379863bb495", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000429650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:42.695304    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"573e64b1-a821-4bce-aba3-b379863bb495", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000429650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:42.695353    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "573e64b1-a821-4bce-aba3-b379863bb495", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/ha-224000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:33:42.695424    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 573e64b1-a821-4bce-aba3-b379863bb495 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/ha-224000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:33:42.695442    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:33:42.697074    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Pid is 5263
	I1213 11:33:42.697519    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Attempt 0
	I1213 11:33:42.697548    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.697612    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid from json: 5263
	I1213 11:33:42.699596    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Searching for fa:54:eb:53:13:e6 in /var/db/dhcpd_leases ...
	I1213 11:33:42.699713    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:33:42.699733    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:33:42.699753    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:33:42.699767    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c99d7}
	I1213 11:33:42.699789    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Found match: fa:54:eb:53:13:e6
	I1213 11:33:42.699807    5233 main.go:141] libmachine: (ha-224000-m02) DBG | IP: 192.169.0.7
	I1213 11:33:42.699845    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetConfigRaw
	I1213 11:33:42.700566    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:33:42.700747    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.701233    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:33:42.701243    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:33:42.701360    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:33:42.701474    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:33:42.701583    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:33:42.701690    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:33:42.701786    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:33:42.701932    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:42.702072    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:33:42.702079    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:33:42.708424    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:33:42.717944    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:33:42.718853    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:42.718881    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:42.718896    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:42.718909    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:43.109099    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:33:43.109114    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:33:43.223848    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:43.223866    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:43.223877    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:43.223884    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:43.224755    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:33:43.224765    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:33:48.997042    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:33:48.997098    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:33:48.997108    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:33:49.020830    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:49 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:34:17.779287    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:34:17.779302    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.779433    5233 buildroot.go:166] provisioning hostname "ha-224000-m02"
	I1213 11:34:17.779441    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.779556    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.779664    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:17.779746    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.779835    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.779942    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:17.780083    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:17.780222    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:17.780230    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m02 && echo "ha-224000-m02" | sudo tee /etc/hostname
	I1213 11:34:17.853511    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m02
	
	I1213 11:34:17.853529    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.853672    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:17.853764    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.853853    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.853936    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:17.854073    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:17.854254    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:17.854268    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:34:17.919686    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:34:17.919701    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:34:17.919711    5233 buildroot.go:174] setting up certificates
	I1213 11:34:17.919720    5233 provision.go:84] configureAuth start
	I1213 11:34:17.919727    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.919878    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:17.919996    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.920105    5233 provision.go:143] copyHostCerts
	I1213 11:34:17.920136    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:34:17.920185    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:34:17.920199    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:34:17.920354    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:34:17.920585    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:34:17.920616    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:34:17.920621    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:34:17.920688    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:34:17.920873    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:34:17.920909    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:34:17.920914    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:34:17.920981    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:34:17.921606    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m02 san=[127.0.0.1 192.169.0.7 ha-224000-m02 localhost minikube]
	I1213 11:34:18.018851    5233 provision.go:177] copyRemoteCerts
	I1213 11:34:18.018930    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:34:18.018950    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.019110    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.019222    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.019333    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.019447    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:18.056757    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:34:18.056824    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:34:18.076340    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:34:18.076402    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:34:18.095849    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:34:18.095918    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:34:18.115722    5233 provision.go:87] duration metric: took 195.866505ms to configureAuth
	I1213 11:34:18.115736    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:34:18.115914    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:18.115934    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:18.116067    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.116155    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.116267    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.116362    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.116456    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.116584    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.116708    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.116716    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:34:18.177000    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:34:18.177013    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:34:18.177102    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:34:18.177115    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.177250    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.177339    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.177434    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.177521    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.177668    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.177802    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.177848    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:34:18.247535    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:34:18.247560    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.247701    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.247799    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.247889    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.247972    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.248144    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.248281    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.248294    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:34:19.945302    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:34:19.945316    5233 machine.go:96] duration metric: took 37.234619508s to provisionDockerMachine
	I1213 11:34:19.945325    5233 start.go:293] postStartSetup for "ha-224000-m02" (driver="hyperkit")
	I1213 11:34:19.945338    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:34:19.945348    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:19.945560    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:34:19.945574    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:19.945673    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:19.945782    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:19.945867    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:19.945970    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:19.983485    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:34:19.986722    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:34:19.986734    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:34:19.986812    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:34:19.986953    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:34:19.986959    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:34:19.987126    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:34:19.994240    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:34:20.014210    5233 start.go:296] duration metric: took 68.83207ms for postStartSetup
	I1213 11:34:20.014230    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.014422    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:34:20.014435    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.014537    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.014623    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.014704    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.014788    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:20.051647    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:34:20.051721    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:34:20.083772    5233 fix.go:56] duration metric: took 37.489367071s for fixHost
	I1213 11:34:20.083797    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.083942    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.084018    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.084114    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.084207    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.084348    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:20.084490    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:20.084497    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:34:20.144388    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118460.015290153
	
	I1213 11:34:20.144404    5233 fix.go:216] guest clock: 1734118460.015290153
	I1213 11:34:20.144410    5233 fix.go:229] Guest: 2024-12-13 11:34:20.015290153 -0800 PST Remote: 2024-12-13 11:34:20.083787 -0800 PST m=+56.558492323 (delta=-68.496847ms)
	I1213 11:34:20.144420    5233 fix.go:200] guest clock delta is within tolerance: -68.496847ms
	I1213 11:34:20.144423    5233 start.go:83] releasing machines lock for "ha-224000-m02", held for 37.550011232s
	I1213 11:34:20.144441    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.144584    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:20.167177    5233 out.go:177] * Found network options:
	I1213 11:34:20.188040    5233 out.go:177]   - NO_PROXY=192.169.0.6
	W1213 11:34:20.210009    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:34:20.210052    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.210927    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.211209    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.211385    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:34:20.211422    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	W1213 11:34:20.211452    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:34:20.211589    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:34:20.211610    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.211651    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.211865    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.211907    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.212101    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.212120    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.212285    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.212303    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:20.212458    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	W1213 11:34:20.245031    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:34:20.245108    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:34:20.305744    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:34:20.305779    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:34:20.305887    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:34:20.321917    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:34:20.330318    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:34:20.338449    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:34:20.338512    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:34:20.346961    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:34:20.355388    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:34:20.363629    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:34:20.371829    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:34:20.380410    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:34:20.388794    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:34:20.397231    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:34:20.405722    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:34:20.413168    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:34:20.413221    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:34:20.421725    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:34:20.429719    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:20.529241    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:34:20.543578    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:34:20.543670    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:34:20.554987    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:34:20.567690    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:34:20.581251    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:34:20.592466    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:34:20.603581    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:34:20.625283    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:34:20.635539    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:34:20.650656    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:34:20.653582    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:34:20.660675    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:34:20.674213    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:34:20.766147    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:34:20.880974    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:34:20.880996    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:34:20.895110    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:20.996896    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:34:23.324011    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.325927019s)
	I1213 11:34:23.324083    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:34:23.334876    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:34:23.345278    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:34:23.440468    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:34:23.550842    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:23.658765    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:34:23.672210    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:34:23.683300    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:23.776286    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:34:23.841785    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:34:23.841892    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:34:23.847288    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:34:23.847368    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:34:23.850479    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:34:23.877340    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:34:23.877457    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:34:23.894304    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:34:23.933199    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:34:23.953827    5233 out.go:177]   - env NO_PROXY=192.169.0.6
	I1213 11:34:23.975731    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:23.976228    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:34:23.980868    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:34:23.990424    5233 mustload.go:65] Loading cluster: ha-224000
	I1213 11:34:23.990607    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:23.990844    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:23.990865    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:24.002451    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51860
	I1213 11:34:24.002790    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:24.003114    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:24.003125    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:24.003331    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:24.003469    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:34:24.003590    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:24.003653    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:34:24.004855    5233 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:34:24.005135    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:24.005159    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:24.016676    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51862
	I1213 11:34:24.017013    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:24.017327    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:24.017339    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:24.017581    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:24.017710    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:34:24.017828    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.7
	I1213 11:34:24.017838    5233 certs.go:194] generating shared ca certs ...
	I1213 11:34:24.017849    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:34:24.017995    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:34:24.018055    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:34:24.018064    5233 certs.go:256] generating profile certs ...
	I1213 11:34:24.018159    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:34:24.018227    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.d29f1a5b
	I1213 11:34:24.018283    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:34:24.018291    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:34:24.018312    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:34:24.018338    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:34:24.018360    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:34:24.018382    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:34:24.018401    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:34:24.018420    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:34:24.018438    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:34:24.018527    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:34:24.018569    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:34:24.018578    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:34:24.018614    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:34:24.018649    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:34:24.018679    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:34:24.018787    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:34:24.018831    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.018854    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.018872    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.018902    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:34:24.018999    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:34:24.019091    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:34:24.019182    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:34:24.019261    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:34:24.046997    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 11:34:24.050721    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 11:34:24.059570    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 11:34:24.062693    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 11:34:24.071272    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 11:34:24.074372    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 11:34:24.083223    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 11:34:24.086307    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1213 11:34:24.095588    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 11:34:24.098711    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 11:34:24.107784    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 11:34:24.110902    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1213 11:34:24.120480    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:34:24.141070    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:34:24.160878    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:34:24.180920    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:34:24.200790    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:34:24.220908    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:34:24.240966    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:34:24.260343    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:34:24.279661    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:34:24.298866    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:34:24.318211    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:34:24.337602    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 11:34:24.351230    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 11:34:24.364930    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 11:34:24.378548    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1213 11:34:24.392045    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 11:34:24.405741    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1213 11:34:24.419366    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 11:34:24.433162    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:34:24.437460    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:34:24.446555    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.449893    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.449949    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.454195    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:34:24.463315    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:34:24.472398    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.475806    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.475869    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.480014    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 11:34:24.488936    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:34:24.498028    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.501370    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.501420    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.505749    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
	I1213 11:34:24.514801    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:34:24.518173    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:34:24.522615    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:34:24.526939    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:34:24.531281    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:34:24.535563    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:34:24.539842    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
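	
	The six "openssl x509 ... -checkend 86400" runs above ask whether each control-plane certificate stays valid for at least another 86400 seconds (24 hours); a failing check would force minikube to regenerate the certificate. A minimal Go sketch of the same check, using only the standard library (the helper name certExpiresWithin is hypothetical; the path is taken from the log):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// certExpiresWithin reports whether the PEM certificate at path expires
	// within the given window, the equivalent of openssl's -checkend.
	func certExpiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		// 24h matches the -checkend 86400 argument in the log.
		expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}
	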
	I1213 11:34:24.544160    5233 kubeadm.go:934] updating node {m02 192.169.0.7 8443 v1.31.2 docker true true} ...
	I1213 11:34:24.544222    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:34:24.544239    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:34:24.544284    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:34:24.557092    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:34:24.557131    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
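	
	The config above is written out a few lines later as /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod on each control-plane node; the elected leader holds the VIP 192.169.0.254 that fronts the API servers. A hypothetical text/template sketch of rendering such a manifest (minikube's real generator lives in kube-vip.go; only the fields that vary per cluster are templated here):
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// A trimmed-down stand-in for the manifest above; illustrative only,
	// not minikube's actual template.
	const kubeVipTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args: ["manager"]
	    env:
	    - name: address
	      value: {{ .VIP }}
	    - name: port
	      value: "{{ .Port }}"
	    image: {{ .Image }}
	    name: kube-vip
	  hostNetwork: true
	`
	
	func main() {
		t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
		// Values taken from the log: the HA VIP, image tag, and API server port.
		_ = t.Execute(os.Stdout, struct {
			VIP, Image string
			Port       int
		}{VIP: "192.169.0.254", Image: "ghcr.io/kube-vip/kube-vip:v0.8.7", Port: 8443})
	}
	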
	I1213 11:34:24.557204    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:34:24.566007    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:34:24.566093    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 11:34:24.575831    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1213 11:34:24.589369    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:34:24.603027    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:34:24.616380    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:34:24.619250    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
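	
	The bash one-liner above pins control-plane.minikube.internal to the HA VIP by stripping any stale mapping from /etc/hosts and appending the new one. A rough Go equivalent of that rewrite, for illustration only (the real command runs remotely under sudo through a temp file):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// pinHost drops any existing line ending in "\t<name>" and appends
	// "ip\tname", mirroring the grep/echo pipeline in the log.
	func pinHost(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var out []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // remove the stale entry
			}
			out = append(out, line)
		}
		out = append(out, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(hostsPath, []byte(strings.Join(out, "\n")+"\n"), 0644)
	}
	
	func main() {
		if err := pinHost("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	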
	I1213 11:34:24.628866    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:24.726853    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:34:24.741435    5233 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:34:24.741619    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:24.762788    5233 out.go:177] * Verifying Kubernetes components...
	I1213 11:34:24.783602    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:24.924600    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:34:24.940595    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:34:24.940795    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 11:34:24.940831    5233 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.6:8443
	I1213 11:34:24.940998    5233 node_ready.go:35] waiting up to 6m0s for node "ha-224000-m02" to be "Ready" ...
	I1213 11:34:24.941077    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:24.941083    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:24.941090    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:24.941095    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:25.941784    5233 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I1213 11:34:25.941996    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:25.942010    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:25.942024    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:25.942031    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:26.943551    5233 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I1213 11:34:26.943636    5233 node_ready.go:53] error getting node "ha-224000-m02": Get "https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02": dial tcp 192.169.0.6:8443: connect: connection refused
	I1213 11:34:26.943705    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:26.943715    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:26.943726    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:26.943733    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.736951    5233 round_trippers.go:574] Response Status: 200 OK in 6791 milliseconds
	I1213 11:34:33.738522    5233 node_ready.go:49] node "ha-224000-m02" has status "Ready":"True"
	I1213 11:34:33.738535    5233 node_ready.go:38] duration metric: took 8.794739664s for node "ha-224000-m02" to be "Ready" ...
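	
	The node_ready/round_trippers lines above show minikube polling GET /api/v1/nodes/ha-224000-m02 until the node reports a Ready condition of True, tolerating transient failures such as the connection-refused error at 11:34:26. A minimal client-go sketch of that kind of wait, assuming a hypothetical kubeconfig path:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Hypothetical path; minikube keeps a per-profile kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every second for up to 6 minutes, mirroring the log's timeout.
		err = wait.PollUntilContextTimeout(context.Background(), time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "ha-224000-m02", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient errors (e.g. connection refused) keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node ready:", err == nil)
	}
	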
	I1213 11:34:33.738543    5233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 11:34:33.738582    5233 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 11:34:33.738592    5233 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 11:34:33.738642    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:33.738649    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.738656    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.738661    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.750539    5233 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1213 11:34:33.759150    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.759215    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:34:33.759222    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.759229    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.759233    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.789285    5233 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1213 11:34:33.789752    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.789760    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.789766    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.789770    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.799141    5233 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1213 11:34:33.799424    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.799433    5233 pod_ready.go:82] duration metric: took 40.258328ms for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.799440    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.799505    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sswfx
	I1213 11:34:33.799511    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.799516    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.799520    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.807914    5233 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1213 11:34:33.808397    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.808404    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.808415    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.808419    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.813376    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.813909    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.813919    5233 pod_ready.go:82] duration metric: took 14.470417ms for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.813926    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.813967    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000
	I1213 11:34:33.813972    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.813978    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.813982    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.817802    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:33.818281    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.818288    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.818294    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.818299    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.823207    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.823485    5233 pod_ready.go:93] pod "etcd-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.823495    5233 pod_ready.go:82] duration metric: took 9.562079ms for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.823503    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.823545    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m02
	I1213 11:34:33.823551    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.823557    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.823561    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.827781    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.828190    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:33.828197    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.828204    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.828207    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.831785    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:33.832141    5233 pod_ready.go:93] pod "etcd-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.832151    5233 pod_ready.go:82] duration metric: took 8.641657ms for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.832159    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.832202    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m03
	I1213 11:34:33.832207    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.832213    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.832219    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.836265    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.939780    5233 request.go:632] Waited for 102.859328ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:33.939849    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:33.939857    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.939865    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.939871    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.946873    5233 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1213 11:34:33.947618    5233 pod_ready.go:93] pod "etcd-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.947630    5233 pod_ready.go:82] duration metric: took 115.439259ms for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
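	
	The "Waited ... due to client-side throttling, not priority and fairness" messages in this stretch come from client-go's default request rate limiter (QPS 5, burst 10 on rest.Config), not from server-side API priority and fairness. A small sketch of building a client with a larger budget, again with a hypothetical kubeconfig path:
	
	package main
	
	import (
		"fmt"
		"os"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// newFastClient raises the client-side rate limits. client-go defaults
	// to QPS=5 and Burst=10, which is what produces the "Waited ... due to
	// client-side throttling" lines above.
	func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}
	
	func main() {
		// Hypothetical path; substitute the profile's kubeconfig.
		if _, err := newFastClient("/path/to/kubeconfig"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	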
	I1213 11:34:33.947652    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.138902    5233 request.go:632] Waited for 191.1655ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:34:34.138938    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:34:34.138982    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.138990    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.138993    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.142609    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:34.339564    5233 request.go:632] Waited for 196.386923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:34.339642    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:34.339652    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.339688    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.339702    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.342232    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:34.342592    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:34.342602    5233 pod_ready.go:82] duration metric: took 394.853592ms for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.342609    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.540215    5233 request.go:632] Waited for 197.501487ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:34:34.540359    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:34:34.540371    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.540384    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.540391    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.544062    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:34.740387    5233 request.go:632] Waited for 195.768993ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:34.740457    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:34.740463    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.740470    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.740474    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.742464    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:34.742759    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:34.742770    5233 pod_ready.go:82] duration metric: took 400.065678ms for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.742777    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.940360    5233 request.go:632] Waited for 197.497147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:34:34.940426    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:34:34.940432    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.940438    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.940442    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.942974    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.139848    5233 request.go:632] Waited for 196.049551ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:35.139909    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:35.139915    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.139922    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.139927    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.142601    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.143154    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:35.143165    5233 pod_ready.go:82] duration metric: took 400.297853ms for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.143173    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.340241    5233 request.go:632] Waited for 196.968883ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:34:35.340288    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:34:35.340294    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.340301    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.340305    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.344403    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:35.539580    5233 request.go:632] Waited for 194.599751ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:35.539614    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:35.539618    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.539625    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.539628    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.541865    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.542227    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:35.542236    5233 pod_ready.go:82] duration metric: took 398.973916ms for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.542244    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.739398    5233 request.go:632] Waited for 197.024136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:35.739550    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:35.739562    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.739574    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.739585    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.743222    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:35.939505    5233 request.go:632] Waited for 195.770633ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:35.939554    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:35.939560    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.939566    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.939572    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.941922    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:36.140471    5233 request.go:632] Waited for 97.089364ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.140522    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.140532    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.140544    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.140552    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.143672    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:36.339675    5233 request.go:632] Waited for 195.459387ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.339785    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.339799    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.339811    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.339818    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.344343    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:36.543195    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.543214    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.543223    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.543228    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.546614    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:36.740875    5233 request.go:632] Waited for 193.633171ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.740939    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.740951    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.740963    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.740974    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.745536    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:37.043269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:37.043284    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.043293    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.043297    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.046460    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:37.139384    5233 request.go:632] Waited for 92.520369ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.139445    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.139451    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.139457    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.139461    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.141508    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:37.544411    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:37.544439    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.544458    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.544464    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.548035    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:37.548715    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.548726    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.548734    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.548740    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.551007    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:37.551414    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:38.043335    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:38.043360    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.043371    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.043377    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.046826    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:38.047379    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:38.047390    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.047397    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.047402    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.049403    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:38.543656    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:38.543682    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.543702    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.543709    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.546343    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:38.546787    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:38.546797    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.546803    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.546807    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.548405    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:39.043375    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:39.043397    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.043405    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.043409    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.046060    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:39.046784    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:39.046792    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.046798    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.046801    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.048453    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:39.543079    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:39.543094    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.543100    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.543103    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.545426    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:39.545991    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:39.545999    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.546005    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.546008    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.548059    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:40.044134    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:40.044192    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.044205    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.044212    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.048181    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:40.048585    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:40.048594    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.048600    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.048603    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.050402    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:40.050801    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:40.543746    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:40.543772    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.543785    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.543818    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.547875    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:40.548358    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:40.548366    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.548372    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.548375    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.550043    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:41.043443    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:41.043501    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.043516    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.043523    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.047137    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:41.047586    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:41.047593    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.047598    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.047602    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.049298    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:41.544147    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:41.544170    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.544182    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.544190    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.548033    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:41.548573    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:41.548581    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.548587    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.548592    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.550267    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.044241    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:42.044256    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.044264    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.044268    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.046885    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:42.047355    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:42.047363    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.047369    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.047373    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.049099    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.543746    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:42.543762    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.543771    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.543776    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.546146    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:42.546521    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:42.546529    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.546535    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.546538    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.548300    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.548618    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:43.043836    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:43.043862    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.043875    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.043884    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.047393    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:43.048068    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:43.048075    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.048082    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.048085    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.049985    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:43.544065    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:43.544086    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.544097    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.544117    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.547029    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:43.547638    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:43.547645    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.547651    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.547657    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.549301    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.044961    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:44.044988    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.045023    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.045031    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.048485    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:44.049062    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:44.049070    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.049076    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.049081    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.050740    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.545903    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:44.545928    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.545945    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.545956    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.549955    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:44.550463    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:44.550470    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.550476    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.550479    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.552158    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.552451    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:45.045945    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:45.045972    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.045984    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.045991    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.049387    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:45.050098    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:45.050109    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.050117    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.050123    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.051738    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:45.544140    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:45.544159    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.544168    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.544172    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.546873    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:45.547352    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:45.547360    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.547366    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.547370    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.548773    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:46.043998    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:46.044020    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.044032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.044038    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.047292    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:46.047783    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:46.047790    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.047795    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.047798    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.049310    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:46.544571    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:46.544597    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.544609    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.544616    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.548134    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:46.548745    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:46.548755    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.548762    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.548771    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.550544    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:47.044994    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:47.045015    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.045026    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.045032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.048476    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:47.049178    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:47.049189    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.049197    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.049202    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.050811    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:47.051136    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:47.545774    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:47.545796    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.545809    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.545816    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.549567    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:47.550282    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:47.550292    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.550308    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.550313    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.552150    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:48.044237    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:48.044252    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.044262    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.044267    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.046593    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:48.047034    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:48.047041    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.047047    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.047051    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.048719    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:48.544694    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:48.544762    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.544781    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.544788    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.548156    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:48.548805    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:48.548813    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.548819    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.548830    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.550405    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:49.045819    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:49.045842    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.045854    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.045864    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.049109    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:49.049810    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:49.049821    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.049828    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.049834    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.051675    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:49.052058    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:49.546343    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:49.546370    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.546384    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.546391    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.550058    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:49.550673    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:49.550684    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.550692    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.550697    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.552559    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.044335    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:50.044361    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.044373    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.044380    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.048285    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.048872    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.048879    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.048885    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.048889    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.050497    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.544806    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:50.544862    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.544875    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.544885    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.548751    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.549398    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.549406    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.549412    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.549416    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.550966    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.551275    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.551284    5233 pod_ready.go:82] duration metric: took 15.007121321s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
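
The 500ms cadence in the requests above is minikube's readiness poll: each tick GETs the pod, inspects its Ready condition, and then GETs the node it is scheduled on. A minimal client-go sketch of the pod half of that loop, assuming an already-built kubernetes.Interface; waitPodReady is a hypothetical helper, not minikube's actual pod_ready.go:

    package example

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls every 500ms (matching the cadence in the log)
    // until the pod's Ready condition is True or the timeout expires.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient API errors and keep polling
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
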
	I1213 11:34:50.551291    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.551328    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:34:50.551333    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.551338    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.551343    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.553068    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.553502    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.553509    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.553514    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.553517    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.555304    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.555632    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.555640    5233 pod_ready.go:82] duration metric: took 4.343987ms for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.555647    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.555686    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:34:50.555691    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.555696    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.555699    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.557601    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.557970    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:34:50.557977    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.557983    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.557986    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.559417    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.559883    5233 pod_ready.go:93] pod "kube-proxy-7b8ch" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.559891    5233 pod_ready.go:82] duration metric: took 4.238545ms for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.559899    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.559932    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:34:50.559949    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.559956    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.559960    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.562004    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:50.562348    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:50.562356    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.562361    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.562365    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.563914    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.564222    5233 pod_ready.go:93] pod "kube-proxy-9wj7k" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.564231    5233 pod_ready.go:82] duration metric: took 4.326466ms for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.564237    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.564269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:34:50.564274    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.564280    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.564293    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.565929    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.566322    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.566328    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.566334    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.566337    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.567867    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.568197    5233 pod_ready.go:93] pod "kube-proxy-9wsr4" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.568208    5233 pod_ready.go:82] duration metric: took 3.96239ms for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.568215    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.745519    5233 request.go:632] Waited for 177.216442ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:34:50.745569    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:34:50.745584    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.745599    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.745607    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.748965    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.946816    5233 request.go:632] Waited for 197.362494ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.946935    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.946944    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.946958    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.946964    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.950494    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.950832    5233 pod_ready.go:93] pod "kube-proxy-gmw9z" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.950846    5233 pod_ready.go:82] duration metric: took 382.598257ms for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
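
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter, not from the apiserver: the poll loop has spent its burst allowance, so each request queues until a token frees up. The limiter is tuned via QPS and Burst on rest.Config; the values below are illustrative, not minikube's:

    package example

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newThrottledClient builds a clientset whose client-side limiter
    // allows 5 req/s steady state with bursts of 10 (illustrative values).
    func newThrottledClient(kubeconfig string) (kubernetes.Interface, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 5
        cfg.Burst = 10
        return kubernetes.NewForConfig(cfg)
    }
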
	I1213 11:34:50.950855    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.146433    5233 request.go:632] Waited for 195.515852ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:34:51.146519    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:34:51.146528    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.146539    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.146545    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.150256    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.346180    5233 request.go:632] Waited for 195.336158ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:51.346304    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:51.346314    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.346325    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.346333    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.350059    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.350701    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:51.350714    5233 pod_ready.go:82] duration metric: took 399.82535ms for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.350723    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.546175    5233 request.go:632] Waited for 195.389456ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:34:51.546301    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:34:51.546322    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.546341    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.546357    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.549469    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.745754    5233 request.go:632] Waited for 195.890122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:51.745865    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:51.745871    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.745877    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.745881    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.747825    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:51.748179    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:51.748191    5233 pod_ready.go:82] duration metric: took 397.435321ms for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.748198    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.945402    5233 request.go:632] Waited for 197.127949ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:34:51.945442    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:34:51.945447    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.945453    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.945457    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.948002    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:52.146346    5233 request.go:632] Waited for 197.812373ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:52.146446    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:52.146458    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.146470    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.146477    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.150176    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.150503    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:52.150514    5233 pod_ready.go:82] duration metric: took 402.286111ms for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:52.150525    5233 pod_ready.go:39] duration metric: took 18.409559513s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 11:34:52.150552    5233 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:34:52.150642    5233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:52.164316    5233 api_server.go:72] duration metric: took 27.417579599s to wait for apiserver process to appear ...
	I1213 11:34:52.164330    5233 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:34:52.164347    5233 api_server.go:253] Checking apiserver healthz at https://192.169.0.6:8443/healthz ...
	I1213 11:34:52.168889    5233 api_server.go:279] https://192.169.0.6:8443/healthz returned 200:
	ok
	I1213 11:34:52.168929    5233 round_trippers.go:463] GET https://192.169.0.6:8443/version
	I1213 11:34:52.168934    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.168946    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.168950    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.169508    5233 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1213 11:34:52.169593    5233 api_server.go:141] control plane version: v1.31.2
	I1213 11:34:52.169605    5233 api_server.go:131] duration metric: took 5.269383ms to wait for apiserver health ...
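
The healthz probe logged above is a plain HTTPS GET that treats status 200 with the literal body "ok" as healthy, after which the /version call reads the control-plane version. A hedged sketch of the probe, assuming an *http.Client already configured with the cluster CA:

    package example

    import (
        "fmt"
        "io"
        "net/http"
    )

    // apiserverHealthz returns nil only for a 200 response whose body is
    // exactly "ok", mirroring the check in the log.
    func apiserverHealthz(client *http.Client, endpoint string) error {
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return err
        }
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
        }
        return nil
    }
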
	I1213 11:34:52.169610    5233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:34:52.346116    5233 request.go:632] Waited for 176.438003ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.346261    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.346270    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.346282    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.346288    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.351411    5233 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1213 11:34:52.356738    5233 system_pods.go:59] 26 kube-system pods found
	I1213 11:34:52.356755    5233 system_pods.go:61] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:34:52.356759    5233 system_pods.go:61] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:34:52.356761    5233 system_pods.go:61] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:34:52.356765    5233 system_pods.go:61] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:34:52.356768    5233 system_pods.go:61] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:34:52.356771    5233 system_pods.go:61] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:34:52.356774    5233 system_pods.go:61] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:34:52.356776    5233 system_pods.go:61] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:34:52.356780    5233 system_pods.go:61] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:34:52.356782    5233 system_pods.go:61] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:34:52.356785    5233 system_pods.go:61] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:34:52.356788    5233 system_pods.go:61] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:34:52.356791    5233 system_pods.go:61] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:34:52.356793    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:34:52.356796    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:34:52.356799    5233 system_pods.go:61] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:34:52.356802    5233 system_pods.go:61] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:34:52.356804    5233 system_pods.go:61] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:34:52.356807    5233 system_pods.go:61] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:34:52.356810    5233 system_pods.go:61] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:34:52.356813    5233 system_pods.go:61] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:34:52.356815    5233 system_pods.go:61] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:34:52.356818    5233 system_pods.go:61] "kube-vip-ha-224000" [5e087427-c14c-4a6c-8a87-f20ea865cca7] Running
	I1213 11:34:52.356821    5233 system_pods.go:61] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:34:52.356823    5233 system_pods.go:61] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:34:52.356826    5233 system_pods.go:61] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running
	I1213 11:34:52.356830    5233 system_pods.go:74] duration metric: took 187.204101ms to wait for pod list to return data ...
	I1213 11:34:52.356836    5233 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:34:52.547123    5233 request.go:632] Waited for 190.17926ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:34:52.547175    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:34:52.547184    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.547197    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.547205    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.550987    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.551153    5233 default_sa.go:45] found service account: "default"
	I1213 11:34:52.551169    5233 default_sa.go:55] duration metric: took 194.315508ms for default service account to be created ...
	I1213 11:34:52.551177    5233 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:34:52.745633    5233 request.go:632] Waited for 194.336495ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.745749    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.745782    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.745804    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.745815    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.750592    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:52.755864    5233 system_pods.go:86] 26 kube-system pods found
	I1213 11:34:52.755877    5233 system_pods.go:89] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:34:52.755881    5233 system_pods.go:89] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:34:52.755884    5233 system_pods.go:89] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:34:52.755887    5233 system_pods.go:89] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:34:52.755890    5233 system_pods.go:89] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:34:52.755893    5233 system_pods.go:89] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:34:52.755896    5233 system_pods.go:89] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:34:52.755899    5233 system_pods.go:89] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:34:52.755902    5233 system_pods.go:89] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:34:52.755905    5233 system_pods.go:89] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:34:52.755908    5233 system_pods.go:89] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:34:52.755911    5233 system_pods.go:89] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:34:52.755914    5233 system_pods.go:89] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:34:52.755917    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:34:52.755919    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:34:52.755923    5233 system_pods.go:89] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:34:52.755926    5233 system_pods.go:89] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:34:52.755929    5233 system_pods.go:89] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:34:52.755932    5233 system_pods.go:89] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:34:52.755935    5233 system_pods.go:89] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:34:52.755938    5233 system_pods.go:89] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:34:52.755941    5233 system_pods.go:89] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:34:52.755944    5233 system_pods.go:89] "kube-vip-ha-224000" [5e087427-c14c-4a6c-8a87-f20ea865cca7] Running
	I1213 11:34:52.755946    5233 system_pods.go:89] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:34:52.755952    5233 system_pods.go:89] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:34:52.755956    5233 system_pods.go:89] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running
	I1213 11:34:52.755960    5233 system_pods.go:126] duration metric: took 204.766483ms to wait for k8s-apps to be running ...
	I1213 11:34:52.755970    5233 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:34:52.756038    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:34:52.767749    5233 system_svc.go:56] duration metric: took 11.776634ms WaitForService to wait for kubelet
	I1213 11:34:52.767765    5233 kubeadm.go:582] duration metric: took 28.020992834s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:34:52.767792    5233 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:34:52.945101    5233 request.go:632] Waited for 177.223908ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes
	I1213 11:34:52.945150    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes
	I1213 11:34:52.945158    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.945170    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.945176    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.949117    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.950061    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950074    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950086    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950090    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950094    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950097    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950099    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950102    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950105    5233 node_conditions.go:105] duration metric: took 182.296841ms to run NodePressure ...
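
The four capacity pairs above are one ephemeral-storage/cpu line per node, read from each node's Status.Capacity. A sketch of the same read (printNodeCapacity is a hypothetical name):

    package example

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists every node and prints the two capacity
    // quantities the log reports for each one.
    func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
        nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }
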
	I1213 11:34:52.950114    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:34:52.950132    5233 start.go:255] writing updated cluster config ...
	I1213 11:34:52.972494    5233 out.go:201] 
	I1213 11:34:52.993694    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:52.993820    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.016586    5233 out.go:177] * Starting "ha-224000-m03" control-plane node in "ha-224000" cluster
	I1213 11:34:53.090440    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:34:53.090478    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:34:53.090696    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:34:53.090718    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:34:53.090850    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.091713    5233 start.go:360] acquireMachinesLock for ha-224000-m03: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:34:53.091822    5233 start.go:364] duration metric: took 84.906µs to acquireMachinesLock for "ha-224000-m03"
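
acquireMachinesLock serializes machine create/start across concurrent minikube processes; the Name/Delay/Timeout fields logged above are that lock's spec (retry every 500ms, give up after 13m0s). A deliberately simplified flock-based sketch of the pattern, not minikube's actual implementation:

    package example

    import (
        "os"
        "syscall"
    )

    // withMachinesLock (hypothetical name) holds an exclusive file lock
    // while fn runs; the real lock adds the retry delay and overall
    // timeout seen in the logged spec.
    func withMachinesLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
        return fn()
    }
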
	I1213 11:34:53.091846    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:34:53.091854    5233 fix.go:54] fixHost starting: m03
	I1213 11:34:53.092290    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:53.092327    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:53.104639    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51869
	I1213 11:34:53.104960    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:53.105280    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:53.105294    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:53.105531    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:53.105628    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:34:53.105732    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetState
	I1213 11:34:53.105817    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.105891    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid from json: 4216
	I1213 11:34:53.107018    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid 4216 missing from process table
	I1213 11:34:53.107070    5233 fix.go:112] recreateIfNeeded on ha-224000-m03: state=Stopped err=<nil>
	I1213 11:34:53.107090    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	W1213 11:34:53.107166    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:34:53.128583    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m03" ...
	I1213 11:34:53.170463    5233 main.go:141] libmachine: (ha-224000-m03) Calling .Start
	I1213 11:34:53.170757    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.170820    5233 main.go:141] libmachine: (ha-224000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid
	I1213 11:34:53.173341    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid 4216 missing from process table
	I1213 11:34:53.173354    5233 main.go:141] libmachine: (ha-224000-m03) DBG | pid 4216 is in state "Stopped"
	I1213 11:34:53.173370    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid...
	I1213 11:34:53.173814    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Using UUID a949994f-ed60-4f04-8e19-b8e4ec0a7cc4
	I1213 11:34:53.198944    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Generated MAC a6:90:90:dd:31:4c
	I1213 11:34:53.198971    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:34:53.199150    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043b710)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:34:53.199192    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043b710)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:34:53.199234    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/ha-224000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:34:53.199276    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a949994f-ed60-4f04-8e19-b8e4ec0a7cc4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/ha-224000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:34:53.199299    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:34:53.201829    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Pid is 5320
	I1213 11:34:53.202230    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Attempt 0
	I1213 11:34:53.202250    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.202308    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid from json: 5320
	I1213 11:34:53.203502    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Searching for a6:90:90:dd:31:4c in /var/db/dhcpd_leases ...
	I1213 11:34:53.203593    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:34:53.203623    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9a30}
	I1213 11:34:53.203647    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:34:53.203666    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:34:53.203681    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c98c5}
	I1213 11:34:53.203694    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Found match: a6:90:90:dd:31:4c
	I1213 11:34:53.203705    5233 main.go:141] libmachine: (ha-224000-m03) DBG | IP: 192.169.0.8
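
hyperkit exposes no address-lookup API, so the driver recovers the VM's IP by scanning macOS's /var/db/dhcpd_leases for the MAC it generated, as the "Searching for a6:90:90:dd:31:4c" lines show. A deliberately simplified sketch of that scan (ipForMAC is a hypothetical name; the real parser walks the brace-delimited lease blocks more carefully):

    package example

    import (
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC scans the dhcpd leases file for a hw_address ending in the
    // given MAC and returns the ip_address from the same lease block.
    func ipForMAC(leasesPath, mac string) (string, error) {
        data, err := os.ReadFile(leasesPath)
        if err != nil {
            return "", err
        }
        var ip string
        for _, line := range strings.Split(string(data), "\n") {
            line = strings.TrimSpace(line)
            // In each block ip_address= precedes hw_address=, so remember
            // the last IP seen and return it on a MAC match.
            if v, ok := strings.CutPrefix(line, "ip_address="); ok {
                ip = v
            }
            if v, ok := strings.CutPrefix(line, "hw_address="); ok && strings.HasSuffix(v, mac) {
                return ip, nil
            }
        }
        return "", fmt.Errorf("no dhcpd lease found for %s", mac)
    }
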
	I1213 11:34:53.203714    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetConfigRaw
	I1213 11:34:53.204410    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:34:53.204623    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.205075    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:34:53.205084    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:34:53.205213    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:34:53.205302    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:34:53.205398    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:34:53.205497    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:34:53.205650    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:34:53.205789    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:53.205928    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:34:53.205935    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:34:53.212601    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:34:53.221560    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:34:53.222531    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:34:53.222558    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:34:53.222580    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:34:53.222599    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:34:53.612220    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:34:53.612234    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:34:53.727037    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:34:53.727057    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:34:53.727094    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:34:53.727117    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:34:53.727874    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:34:53.727886    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:34:59.521710    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:34:59.521832    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:34:59.521841    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:34:59.545358    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:35:28.268303    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:35:28.268318    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.268453    5233 buildroot.go:166] provisioning hostname "ha-224000-m03"
	I1213 11:35:28.268464    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.268545    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.268633    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.268718    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.268794    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.268890    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.269047    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.269192    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.269201    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m03 && echo "ha-224000-m03" | sudo tee /etc/hostname
	I1213 11:35:28.331907    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m03
	
	I1213 11:35:28.331923    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.332060    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.332169    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.332280    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.332367    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.332526    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.332658    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.332669    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:35:28.389916    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
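
The shell block above is an idempotent hosts-file update: if no /etc/hosts line already ends with the new hostname, it rewrites an existing 127.0.1.1 entry in place with sed, otherwise appends one, so re-provisioning never duplicates the mapping.
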
	I1213 11:35:28.389931    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:35:28.389961    5233 buildroot.go:174] setting up certificates
	I1213 11:35:28.389971    5233 provision.go:84] configureAuth start
	I1213 11:35:28.389982    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.390117    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:28.390208    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.390313    5233 provision.go:143] copyHostCerts
	I1213 11:35:28.390344    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:35:28.390394    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:35:28.390401    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:35:28.390555    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:35:28.390787    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:35:28.390820    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:35:28.390825    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:35:28.390910    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:35:28.391077    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:35:28.391106    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:35:28.391111    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:35:28.391228    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:35:28.391418    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m03 san=[127.0.0.1 192.169.0.8 ha-224000-m03 localhost minikube]
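
configureAuth issues a per-machine Docker server certificate whose subject alternative names are exactly the san=[...] list logged above (loopback, the VM IP, the hostname and its aliases). A hedged crypto/x509 sketch of building such a certificate, assuming caCert, caKey, and serverKey are already loaded; this is not minikube's provision code:

    package example

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server cert with the SANs from the log.
    func issueServerCert(caCert *x509.Certificate, caKey, serverKey *rsa.PrivateKey) ([]byte, error) {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1), // real code would use a random serial
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-224000-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-224000-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    }
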
	I1213 11:35:28.615259    5233 provision.go:177] copyRemoteCerts
	I1213 11:35:28.615322    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:35:28.615337    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.615483    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.615599    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.615704    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.615808    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:28.648163    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:35:28.648235    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:35:28.668111    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:35:28.668178    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:35:28.688091    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:35:28.688163    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:35:28.707920    5233 provision.go:87] duration metric: took 317.933618ms to configureAuth
	I1213 11:35:28.707937    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:35:28.708107    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:28.708120    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:28.708271    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.708384    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.708472    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.708567    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.708672    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.708792    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.708915    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.708923    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:35:28.759762    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:35:28.759775    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:35:28.759854    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:35:28.759870    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.760005    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.760093    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.760190    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.760274    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.760438    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.760606    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.760655    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:35:28.823874    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	Environment=NO_PROXY=192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:35:28.823891    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.824044    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.824161    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.824266    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.824376    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.824572    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.824732    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.824746    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:35:30.486456    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:35:30.486475    5233 machine.go:96] duration metric: took 37.280827239s to provisionDockerMachine
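
The SSH command just above is an idempotent unit update: diff the rendered docker.service.new against the installed unit, and only when they differ move the new file into place, then daemon-reload, enable and restart. Here diff fails because no unit exists yet, so the new file is installed and the multi-user.target symlink gets created. A rough Go sketch of the same compare-then-swap pattern, with the systemctl steps shelled out (paths and commands mirror the log; running it for real requires root on a systemd host):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const cur = "/lib/systemd/system/docker.service"
	const next = cur + ".new"

	old, err := os.ReadFile(cur) // a missing unit counts as "changed"
	fresh, ferr := os.ReadFile(next)
	if ferr != nil {
		log.Fatal(ferr)
	}
	if err == nil && bytes.Equal(old, fresh) {
		return // unit unchanged, nothing to restart
	}
	if err := os.Rename(next, cur); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"-f", "enable", "docker"}, {"-f", "restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}

minikube's actual implementation performs this remotely as a single shell pipeline, so the whole update costs one SSH round trip.
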
	I1213 11:35:30.486485    5233 start.go:293] postStartSetup for "ha-224000-m03" (driver="hyperkit")
	I1213 11:35:30.486499    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:35:30.486509    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.486716    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:35:30.486731    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.486828    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.486916    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.487008    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.487103    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.519400    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:35:30.522965    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:35:30.522976    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:35:30.523076    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:35:30.523222    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:35:30.523229    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:35:30.523407    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:35:30.531672    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:35:30.550850    5233 start.go:296] duration metric: took 64.356166ms for postStartSetup
	I1213 11:35:30.550875    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.551059    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:35:30.551072    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.551169    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.551256    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.551369    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.551457    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.583546    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:35:30.583619    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:35:30.638958    5233 fix.go:56] duration metric: took 37.546530399s for fixHost
	I1213 11:35:30.638984    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.639131    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.639231    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.639317    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.639400    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.639557    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:30.639690    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:30.639697    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:35:30.691357    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118530.813836388
	
	I1213 11:35:30.691371    5233 fix.go:216] guest clock: 1734118530.813836388
	I1213 11:35:30.691376    5233 fix.go:229] Guest: 2024-12-13 11:35:30.813836388 -0800 PST Remote: 2024-12-13 11:35:30.638973 -0800 PST m=+127.105464891 (delta=174.863388ms)
	I1213 11:35:30.691387    5233 fix.go:200] guest clock delta is within tolerance: 174.863388ms
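
fix.go reads the guest clock with `date +%s.%N` over SSH and accepts the drift when it is within tolerance (about 175ms here). A small Go sketch of parsing that epoch string and checking the delta; the 2-second tolerance below is an assumption for illustration, not minikube's actual constant:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseEpoch(s string) (time.Time, error) {
	// "1734118530.813836388" -> seconds + nanoseconds
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nanos int64
	if frac != "" {
		n, err := strconv.ParseInt((frac + "000000000")[:9], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nanos = n
	}
	return time.Unix(secs, nanos), nil
}

func main() {
	guest, err := parseEpoch("1734118530.813836388")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v within=%v\n", delta, delta < 2*time.Second)
}
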
	I1213 11:35:30.691390    5233 start.go:83] releasing machines lock for "ha-224000-m03", held for 37.598987831s
	I1213 11:35:30.691409    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.691545    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:30.716697    5233 out.go:177] * Found network options:
	I1213 11:35:30.736372    5233 out.go:177]   - NO_PROXY=192.169.0.6,192.169.0.7
	W1213 11:35:30.757863    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:35:30.757920    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:35:30.757939    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.758810    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.759058    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.759249    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:35:30.759286    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	W1213 11:35:30.759290    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:35:30.759313    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:35:30.759449    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:35:30.759471    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.759537    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.759655    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.759708    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.759905    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.759938    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.760131    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.760152    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.760321    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	W1213 11:35:30.790341    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:35:30.790425    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:35:30.835439    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:35:30.835453    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:35:30.835523    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:35:30.850635    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:35:30.858947    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:35:30.867636    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:35:30.867708    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:35:30.876811    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:35:30.885325    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:35:30.893786    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:35:30.902226    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:35:30.910790    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:35:30.919236    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:35:30.927803    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
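
The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver, preserving indentation through the capture group. The same edit expressed with Go's regexp package (sample input inline; the real file is edited over SSH):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
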
	I1213 11:35:30.936377    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:35:30.943894    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:35:30.943955    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:35:30.952569    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
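
When the sysctl probe fails with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist, the provisioner falls back to loading br_netfilter and enabling IPv4 forwarding, exactly the two commands above. A stdlib sketch of that fallback (needs root on Linux):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// Module not loaded yet; mirror `sudo modprobe br_netfilter`.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe: %v\n%s", err, out)
		}
	}
	// Mirror `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
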
	I1213 11:35:30.959891    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.061578    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:35:31.081433    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:35:31.081517    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:35:31.100335    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:35:31.112429    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:35:31.127499    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:35:31.138533    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:35:31.148917    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:35:31.174782    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:35:31.184889    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:35:31.201805    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:35:31.204856    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:35:31.212060    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:35:31.225973    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:35:31.326706    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:35:31.431909    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:35:31.431936    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:35:31.446011    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.546239    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:35:33.884526    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.338279376s)
	I1213 11:35:33.884605    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:35:33.896180    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:35:33.907512    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:35:34.018152    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:35:34.117342    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:34.216289    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:35:34.229723    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:35:34.241050    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:34.333405    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:35:34.400848    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:35:34.400950    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:35:34.406614    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:35:34.406682    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:35:34.409985    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:35:34.437608    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:35:34.437696    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:35:34.456769    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:35:34.499545    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:35:34.556752    5233 out.go:177]   - env NO_PROXY=192.169.0.6
	I1213 11:35:34.577782    5233 out.go:177]   - env NO_PROXY=192.169.0.6,192.169.0.7
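
The runtime probe a few lines up asks crictl for the CRI version and then runs `docker version --format {{.Server.Version}}`, which yields the 27.4.0 reported in the "Preparing Kubernetes" line. The same probe in Go, shelling out to the docker CLI (assumes docker is on PATH):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker server version:", strings.TrimSpace(string(out)))
}
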
	I1213 11:35:34.598561    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:34.598902    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:35:34.602518    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
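
The bash one-liner above pins host.minikube.internal in /etc/hosts: drop any existing line for the name, append the fresh mapping, and copy the result back via a temp file. A hedged Go equivalent of the filter-and-append step, printing to stdout instead of touching /etc/hosts:

package main

import (
	"fmt"
	"strings"
)

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.169.0.1\thost.minikube.internal\n"
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		// Mirror: grep -v $'\thost.minikube.internal$'
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.169.0.1\thost.minikube.internal")
	fmt.Println(strings.Join(kept, "\n"))
}
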
	I1213 11:35:34.612856    5233 mustload.go:65] Loading cluster: ha-224000
	I1213 11:35:34.613037    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:34.613269    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:35:34.613292    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:35:34.625281    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51891
	I1213 11:35:34.625655    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:35:34.626009    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:35:34.626025    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:35:34.626248    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:35:34.626340    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:35:34.626428    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:35:34.626490    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:35:34.627676    5233 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:35:34.627955    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:35:34.627988    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:35:34.640060    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51893
	I1213 11:35:34.640392    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:35:34.640716    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:35:34.640735    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:35:34.640975    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:35:34.641081    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:35:34.641190    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.8
	I1213 11:35:34.641199    5233 certs.go:194] generating shared ca certs ...
	I1213 11:35:34.641214    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:35:34.641369    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:35:34.641440    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:35:34.641449    5233 certs.go:256] generating profile certs ...
	I1213 11:35:34.641547    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:35:34.641650    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.f4268d28
	I1213 11:35:34.641704    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:35:34.641711    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:35:34.641732    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:35:34.641753    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:35:34.641772    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:35:34.641790    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:35:34.641809    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:35:34.641828    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:35:34.641845    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:35:34.641926    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:35:34.641977    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:35:34.641992    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:35:34.642032    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:35:34.642067    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:35:34.642096    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:35:34.642163    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:35:34.642196    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:34.642223    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:35:34.642243    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:35:34.642269    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:35:34.642361    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:35:34.642463    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:35:34.642554    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:35:34.642635    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:35:34.669703    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 11:35:34.673030    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 11:35:34.682641    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 11:35:34.686133    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 11:35:34.695208    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 11:35:34.698292    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 11:35:34.708147    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 11:35:34.711343    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1213 11:35:34.720522    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 11:35:34.723933    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 11:35:34.733200    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 11:35:34.736904    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1213 11:35:34.748040    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:35:34.768078    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:35:34.787823    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:35:34.807347    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:35:34.827367    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:35:34.847452    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:35:34.866717    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:35:34.886226    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:35:34.905392    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:35:34.924502    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:35:34.944848    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:35:34.964162    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 11:35:34.977883    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 11:35:34.991483    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 11:35:35.005083    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1213 11:35:35.018833    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 11:35:35.033559    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1213 11:35:35.047330    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 11:35:35.060953    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:35:35.065093    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:35:35.074224    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.077601    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.077646    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.081873    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
	I1213 11:35:35.091167    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:35:35.100351    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.103730    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.103786    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.107944    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:35:35.116996    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:35:35.126132    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.129577    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.129642    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.133859    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 11:35:35.143102    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:35:35.146630    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:35:35.150908    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:35:35.155104    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:35:35.159301    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:35:35.163626    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:35:35.167845    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
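
The six `openssl x509 -noout -checkend 86400` runs above confirm that none of the control-plane certificates expire within the next 24 hours (exit status 0 means still valid past the window). The equivalent check in Go against a certificate's NotAfter; the file path is taken from the log, but any PEM certificate works:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: openssl x509 -noout -checkend 86400
	deadline := time.Now().Add(24 * time.Hour)
	fmt.Println("valid past 24h:", cert.NotAfter.After(deadline))
}
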
	I1213 11:35:35.172217    5233 kubeadm.go:934] updating node {m03 192.169.0.8 8443 v1.31.2 docker true true} ...
	I1213 11:35:35.172277    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:35:35.172296    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:35:35.172356    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:35:35.190873    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:35:35.190925    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
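
kube-vip.go renders the static-pod manifest above from the cluster's virtual IP (192.169.0.254) and API port, with control-plane load balancing auto-enabled. A pared-down text/template sketch of that generation; the struct fields and the trimmed template are illustrative, not minikube's actual template:

package main

import (
	"log"
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: lb_enable
      value: "{{.EnableLB}}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	err := t.Execute(os.Stdout, struct {
		Image, VIP string
		Port       int
		EnableLB   bool
	}{"ghcr.io/kube-vip/kube-vip:v0.8.7", "192.169.0.254", 8443, true})
	if err != nil {
		log.Fatal(err)
	}
}
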
	I1213 11:35:35.191004    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:35:35.201615    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:35:35.201692    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 11:35:35.209907    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1213 11:35:35.223540    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:35:35.237211    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:35:35.251084    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:35:35.254255    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:35:35.264617    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:35.363941    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:35:35.379515    5233 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:35:35.379713    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:35.453014    5233 out.go:177] * Verifying Kubernetes components...
	I1213 11:35:35.489942    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:35.641418    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:35:35.655240    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:35:35.655455    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 11:35:35.655497    5233 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.6:8443
	I1213 11:35:35.655667    5233 node_ready.go:35] waiting up to 6m0s for node "ha-224000-m03" to be "Ready" ...
	I1213 11:35:35.655710    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:35.655716    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:35.655722    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:35.655726    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:35.658541    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.157140    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:36.157157    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.157163    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.157167    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.159862    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.160261    5233 node_ready.go:49] node "ha-224000-m03" has status "Ready":"True"
	I1213 11:35:36.160270    5233 node_ready.go:38] duration metric: took 504.598087ms for node "ha-224000-m03" to be "Ready" ...
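
node_ready.go (and pod_ready.go after it) polls the API server roughly every 500ms until the Ready condition reports True, bounded by the 6m wait declared above. A generic stdlib sketch of that poll-until-ready loop; the condition closure below is a stand-in for the GET /api/v1/nodes/<name> status check:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func waitReady(ctx context.Context, interval time.Duration, ready func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ok, err := ready()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for Ready")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	start := time.Now()
	err := waitReady(ctx, 500*time.Millisecond, func() (bool, error) {
		// Stand-in for querying the node's Ready condition.
		return time.Since(start) > time.Second, nil
	})
	fmt.Println("ready:", err == nil)
}
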
	I1213 11:35:36.160277    5233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 11:35:36.160322    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:35:36.160332    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.160339    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.160345    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.164741    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:35:36.170442    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:36.170504    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:36.170510    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.170516    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.170519    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.172921    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.173369    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:36.173377    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.173383    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.173390    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.175266    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:36.671483    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:36.671501    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.671508    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.671513    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.674268    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.675049    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:36.675058    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.675065    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.675069    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.678278    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:37.170684    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:37.170697    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.170703    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.170706    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.173103    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:37.173639    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:37.173649    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.173659    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.173663    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.175563    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:37.670841    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:37.670859    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.670867    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.670870    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.673709    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:37.674599    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:37.674609    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.674616    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.674619    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.677468    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.171983    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:38.172002    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.172010    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.172014    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.174562    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.175168    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:38.175176    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.175183    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.175186    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.177058    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:38.177428    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:38.671814    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:38.671831    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.671839    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.671843    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.674211    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.674978    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:38.674987    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.674994    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.675005    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.677077    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.171353    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:39.171371    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.171379    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.171383    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.173885    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.174765    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:39.174780    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.174787    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.174791    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.176969    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.672084    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:39.672101    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.672107    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.672111    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.674182    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.674701    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:39.674709    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.674715    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.674719    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.676491    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.170778    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:40.170793    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.170801    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.170805    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.172716    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.173201    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:40.173209    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.173215    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.173218    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.174782    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.670537    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:40.670554    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.670561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.670564    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.672905    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:40.673371    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:40.673378    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.673384    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.673388    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.675334    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.675698    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:41.170540    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:41.170555    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.170561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.170565    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.172610    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:41.173071    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:41.173079    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.173086    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.173090    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.174669    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:41.670954    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:41.670970    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.670977    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.670980    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.672906    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:41.673327    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:41.673335    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.673341    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.673346    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.674840    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:42.171591    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:42.171607    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.171614    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.171626    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.173848    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:42.174323    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:42.174331    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.174336    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.174339    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.176072    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:42.670670    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:42.670685    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.670691    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.670695    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.672916    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:42.673334    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:42.673342    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.673348    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.673352    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.674953    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:43.171018    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:43.171035    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.171041    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.171044    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.173500    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:43.173933    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:43.173942    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.173948    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.173952    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.175797    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:43.176282    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:43.671883    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:43.671900    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.671909    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.671914    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.674489    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:43.674937    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:43.674945    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.674952    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.674959    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.676652    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:44.171731    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:44.171747    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.171754    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.171757    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.174220    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:44.174839    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:44.174847    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.174853    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.174858    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.176592    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:44.671463    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:44.671523    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.671535    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.671543    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.674700    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:44.675156    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:44.675163    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.675169    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.675172    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.676845    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:45.170845    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:45.170871    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.170883    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.170890    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.174136    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:45.174847    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:45.174855    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.174861    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.174865    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.177051    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:45.177329    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:45.671539    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:45.671565    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.671577    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.671584    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.674504    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:45.674930    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:45.674937    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.674944    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.674948    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.676902    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:46.171017    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:46.171043    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.171055    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.171064    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.174349    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:46.175105    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:46.175113    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.175119    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.175123    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.176671    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:46.670718    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:46.670742    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.670753    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.670760    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.673727    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:46.674143    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:46.674150    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.674155    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.674159    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.675697    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:47.171141    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:47.171167    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.171181    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.171188    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.174674    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:47.175237    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:47.175248    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.175256    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.175283    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.177291    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:47.177630    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:47.670502    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:47.670539    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.670550    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.670555    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.673105    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:47.673592    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:47.673603    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.673624    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.673631    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.675150    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:48.170714    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:48.170743    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.170753    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.170759    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.174068    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:48.174871    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:48.174879    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.174885    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.174888    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.176423    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:48.671508    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:48.671547    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.671558    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.671563    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.673769    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:48.674261    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:48.674268    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.674274    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.674276    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.676263    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:49.170991    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:49.171006    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.171015    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.171020    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.173356    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.173868    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:49.173876    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.173882    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.173893    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.175974    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.671308    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:49.671349    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.671359    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.671375    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.674049    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.674657    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:49.674666    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.674672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.674676    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.676408    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:49.676866    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:50.170526    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:50.170546    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.170555    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.170560    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.172951    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:50.173418    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:50.173454    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.173462    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.173467    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.175187    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:50.671268    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:50.671306    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.671315    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.671319    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.673518    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:50.674124    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:50.674132    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.674139    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.674142    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.675972    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.172292    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:51.172318    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.172329    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.172335    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.175388    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:51.176242    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:51.176250    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.176255    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.176271    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.178034    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.672241    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:51.672259    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.672268    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.672273    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.674716    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:51.675171    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:51.675178    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.675184    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.675187    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.677031    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.677333    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:52.171324    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:52.171350    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.171394    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.171403    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.174624    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:52.175339    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:52.175347    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.175353    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.175356    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.176912    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:52.672143    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:52.672156    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.672163    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.672166    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.674142    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:52.674648    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:52.674656    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.674662    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.674665    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.676343    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.171789    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:53.171834    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.171845    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.171850    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.173997    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:53.174633    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:53.174641    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.174647    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.174652    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.176489    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.671631    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:53.671689    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.671702    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.671708    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.674629    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:53.675317    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:53.675324    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.675330    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.675335    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.677039    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.677545    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:54.172269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:54.172296    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.172309    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.172316    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.175190    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:54.175863    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:54.175871    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.175880    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.175884    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.177695    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:54.671631    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:54.671656    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.671679    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.671687    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.674858    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:54.675633    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:54.675644    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.675652    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.675659    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.677622    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.172159    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:55.172183    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.172195    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.172200    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.175352    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:55.175951    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.175961    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.175969    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.175974    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.177826    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.672525    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:55.672548    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.672561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.672568    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.676200    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:55.676655    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.676663    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.676669    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.676672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.679603    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.680007    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.680026    5233 pod_ready.go:82] duration metric: took 19.509731372s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
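
The run of paired GETs above is a single readiness poll: roughly every 500ms (per the timestamps) the waiter re-fetches the coredns pod and its node, and pod_ready.go logs Ready:"False" until the pod turns Ready after ~19.5s. Below is a minimal client-go sketch of that loop, with the ~500ms interval and the 6m0s timeout read off this log; it is an illustration, not minikube's actual pod_ready.go, and waitPodReady is a hypothetical helper name.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod until its Ready condition is "True" or the
    // timeout expires, mirroring the GET-and-check cadence in the log above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(context.Background(), cs, "kube-system",
            "coredns-7c65d6cfc9-5ds6r", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }
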
	I1213 11:35:55.680040    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.680088    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sswfx
	I1213 11:35:55.680094    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.680100    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.680104    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.682544    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.683008    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.683017    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.683023    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.683027    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.684867    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.685203    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.685212    5233 pod_ready.go:82] duration metric: took 5.165234ms for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.685222    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.685259    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000
	I1213 11:35:55.685264    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.685270    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.685274    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.687013    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.687444    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.687452    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.687458    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.687463    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.689192    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.689502    5233 pod_ready.go:93] pod "etcd-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.689510    5233 pod_ready.go:82] duration metric: took 4.282723ms for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.689517    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.689546    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m02
	I1213 11:35:55.689551    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.689557    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.689561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.691520    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.691918    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:55.691926    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.691932    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.691935    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.693585    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.694009    5233 pod_ready.go:93] pod "etcd-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.694017    5233 pod_ready.go:82] duration metric: took 4.494586ms for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.694023    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.694061    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m03
	I1213 11:35:55.694066    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.694071    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.694074    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.696047    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.696583    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:55.696591    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.696597    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.696602    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.698695    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.699182    5233 pod_ready.go:93] pod "etcd-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.699191    5233 pod_ready.go:82] duration metric: took 5.162024ms for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.699204    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.873308    5233 request.go:632] Waited for 174.059147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:35:55.873398    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:35:55.873409    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.873420    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.873432    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.877057    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:56.073941    5233 request.go:632] Waited for 196.465756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:56.073990    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:56.073998    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.074007    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.074015    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.076268    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.076663    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.076673    5233 pod_ready.go:82] duration metric: took 377.466982ms for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
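
The request.go:632 "Waited for ... due to client-side throttling" lines are emitted by client-go's client-side token-bucket rate limiter, which starts delaying requests once the burst allowance is spent; the knobs are QPS and Burst on rest.Config. A sketch of how such delays arise (QPS=5/Burst=10 are client-go's long-standing defaults, not necessarily what minikube configures):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Client-side token bucket: up to Burst requests go out immediately,
        // the rest queue at QPS requests per second. Sufficiently long waits
        // are logged by client-go as "client-side throttling".
        cfg.QPS = 5
        cfg.Burst = 10
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Firing calls back to back exhausts the burst allowance, so later
        // requests are delayed by the limiter before they reach the wire.
        for i := 0; i < 20; i++ {
            if _, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-224000", metav1.GetOptions{}); err != nil {
                fmt.Println("get node:", err)
            }
        }
    }

The "not priority and fairness" clause in the message distinguishes this local delay from server-side API Priority and Fairness queuing.
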
	I1213 11:35:56.076681    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.272907    5233 request.go:632] Waited for 196.189621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:35:56.272950    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:35:56.272958    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.272967    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.272973    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.275118    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.473781    5233 request.go:632] Waited for 198.215756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:56.473814    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:56.473818    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.473825    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.473834    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.476052    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.476328    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.476337    5233 pod_ready.go:82] duration metric: took 399.655338ms for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.476344    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.672963    5233 request.go:632] Waited for 196.573548ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:35:56.673025    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:35:56.673042    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.673069    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.673082    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.676053    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.874041    5233 request.go:632] Waited for 197.242072ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:56.874093    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:56.874101    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.874112    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.874148    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.877393    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:56.877917    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.877925    5233 pod_ready.go:82] duration metric: took 401.579167ms for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.877932    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.072677    5233 request.go:632] Waited for 194.687466ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:35:57.072807    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:35:57.072818    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.072829    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.072837    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.076583    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:57.273280    5233 request.go:632] Waited for 195.960523ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:57.273356    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:57.273364    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.273372    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.273377    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.275590    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:57.275864    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:57.275873    5233 pod_ready.go:82] duration metric: took 397.938639ms for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.275887    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.473240    5233 request.go:632] Waited for 197.314418ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:35:57.473276    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:35:57.473282    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.473288    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.473293    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.479318    5233 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1213 11:35:57.672800    5233 request.go:632] Waited for 192.751323ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:57.672854    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:57.672865    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.672879    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.672883    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.674679    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:57.674953    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:57.674964    5233 pod_ready.go:82] duration metric: took 399.075588ms for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.674971    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.872629    5233 request.go:632] Waited for 197.615913ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:35:57.872684    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:35:57.872690    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.872698    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.872704    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.875523    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.072684    5233 request.go:632] Waited for 196.666527ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:58.072801    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:58.072814    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.072825    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.072835    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.076186    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.076572    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:58.076584    5233 pod_ready.go:82] duration metric: took 401.611001ms for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.076594    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.272566    5233 request.go:632] Waited for 195.927789ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:35:58.272623    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:35:58.272631    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.272639    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.272646    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.275090    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.473816    5233 request.go:632] Waited for 198.141217ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:35:58.473894    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:35:58.473905    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.473916    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.473922    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.476808    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.477275    5233 pod_ready.go:98] node "ha-224000-m04" hosting pod "kube-proxy-7b8ch" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-224000-m04" has status "Ready":"Unknown"
	I1213 11:35:58.477286    5233 pod_ready.go:82] duration metric: took 400.69023ms for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	E1213 11:35:58.477294    5233 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-224000-m04" hosting pod "kube-proxy-7b8ch" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-224000-m04" has status "Ready":"Unknown"
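
Here the waiter gives up on kube-proxy-7b8ch not because of the pod itself but because its hosting node, ha-224000-m04, reports Ready:"Unknown", which typically means the node controller has stopped hearing from that kubelet. A sketch of that node gate, assuming client-go (nodeIsReady is a hypothetical helper, not minikube's code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node's Ready condition is "True".
    // Both "False" and "Unknown" (as for ha-224000-m04 above) fail the gate.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        ctx := context.Background()
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-7b8ch", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // pod.Spec.NodeName is how the waiter finds the hosting node to check.
        ready, err := nodeIsReady(ctx, cs, pod.Spec.NodeName)
        if err != nil {
            panic(err)
        }
        fmt.Printf("node %s ready: %v\n", pod.Spec.NodeName, ready)
    }
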
	I1213 11:35:58.477302    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.672582    5233 request.go:632] Waited for 195.231932ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:35:58.672629    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:35:58.672638    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.672649    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.672657    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.676219    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.873974    5233 request.go:632] Waited for 197.337714ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:58.874026    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:58.874034    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.874045    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.874051    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.877592    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.877988    5233 pod_ready.go:93] pod "kube-proxy-9wj7k" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:58.878000    5233 pod_ready.go:82] duration metric: took 400.696273ms for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.878009    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.073381    5233 request.go:632] Waited for 195.314343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:35:59.073433    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:35:59.073441    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.073449    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.073455    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.075792    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:59.273216    5233 request.go:632] Waited for 196.949491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:59.273267    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:59.273283    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.273292    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.273298    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.275702    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:59.276247    5233 pod_ready.go:93] pod "kube-proxy-9wsr4" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:59.276258    5233 pod_ready.go:82] duration metric: took 398.245999ms for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.276265    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.473693    5233 request.go:632] Waited for 197.370074ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:35:59.473831    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:35:59.473842    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.473854    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.473862    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.477420    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:59.672646    5233 request.go:632] Waited for 194.659895ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:59.672759    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:59.672771    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.672784    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.672794    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.676016    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:59.676434    5233 pod_ready.go:93] pod "kube-proxy-gmw9z" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:59.676444    5233 pod_ready.go:82] duration metric: took 400.177932ms for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.676451    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.873284    5233 request.go:632] Waited for 196.790328ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:35:59.873409    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:35:59.873424    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.873437    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.873446    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.876647    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.072905    5233 request.go:632] Waited for 195.872865ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:36:00.073011    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:36:00.073019    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.073028    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.073032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.076068    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.076488    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.076498    5233 pod_ready.go:82] duration metric: took 400.046456ms for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.076506    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.273249    5233 request.go:632] Waited for 196.676645ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:36:00.273361    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:36:00.273380    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.273405    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.273414    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.276870    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.473222    5233 request.go:632] Waited for 195.664041ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:36:00.473283    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:36:00.473291    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.473300    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.473304    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.475794    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:36:00.476078    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.476087    5233 pod_ready.go:82] duration metric: took 399.579687ms for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.476096    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.674009    5233 request.go:632] Waited for 197.794547ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:36:00.674081    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:36:00.674092    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.674106    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.674121    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.677780    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.873417    5233 request.go:632] Waited for 194.907567ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:36:00.873476    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:36:00.873488    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.873500    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.873508    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.876715    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.877199    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.877213    5233 pod_ready.go:82] duration metric: took 401.11429ms for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.877234    5233 pod_ready.go:39] duration metric: took 24.717168247s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
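
Each pod_ready.go wait above reduces to fetching the pod and inspecting its Ready condition. A minimal client-go sketch of that check, assuming an illustrative kubeconfig path; the namespace and pod name are taken from the log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path, not from this run.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
            "kube-scheduler-ha-224000", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // A pod counts as "Ready" when its PodReady condition is ConditionTrue.
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Printf("pod %q Ready=%v\n", pod.Name, c.Status == corev1.ConditionTrue)
            }
        }
    }
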
	I1213 11:36:00.877249    5233 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:36:00.877335    5233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:00.889500    5233 api_server.go:72] duration metric: took 25.510179125s to wait for apiserver process to appear ...
	I1213 11:36:00.889514    5233 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:36:00.889525    5233 api_server.go:253] Checking apiserver healthz at https://192.169.0.6:8443/healthz ...
	I1213 11:36:00.892661    5233 api_server.go:279] https://192.169.0.6:8443/healthz returned 200:
	ok
	I1213 11:36:00.892694    5233 round_trippers.go:463] GET https://192.169.0.6:8443/version
	I1213 11:36:00.892700    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.892706    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.892710    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.893221    5233 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1213 11:36:00.893255    5233 api_server.go:141] control plane version: v1.31.2
	I1213 11:36:00.893263    5233 api_server.go:131] duration metric: took 3.744726ms to wait for apiserver health ...
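
The healthz gate above is an HTTPS GET that must return 200 with body "ok". A stripped-down sketch of such a probe; client certificates are omitted and TLS verification disabled purely for brevity, whereas the real check authenticates against the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // WARNING: InsecureSkipVerify is for this sketch only.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.169.0.6:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // healthy apiserver: 200 ok
    }
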
	I1213 11:36:00.893268    5233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:36:01.073160    5233 request.go:632] Waited for 179.837088ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.073311    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.073322    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.073333    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.073340    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.081092    5233 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1213 11:36:01.086508    5233 system_pods.go:59] 26 kube-system pods found
	I1213 11:36:01.086526    5233 system_pods.go:61] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:36:01.086530    5233 system_pods.go:61] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:36:01.086533    5233 system_pods.go:61] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:36:01.086543    5233 system_pods.go:61] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:36:01.086547    5233 system_pods.go:61] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:36:01.086550    5233 system_pods.go:61] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:36:01.086553    5233 system_pods.go:61] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:36:01.086555    5233 system_pods.go:61] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:36:01.086559    5233 system_pods.go:61] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:36:01.086565    5233 system_pods.go:61] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:36:01.086569    5233 system_pods.go:61] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:36:01.086572    5233 system_pods.go:61] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:36:01.086575    5233 system_pods.go:61] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:36:01.086579    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:36:01.086582    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:36:01.086585    5233 system_pods.go:61] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:36:01.086588    5233 system_pods.go:61] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:36:01.086591    5233 system_pods.go:61] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:36:01.086593    5233 system_pods.go:61] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:36:01.086596    5233 system_pods.go:61] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:36:01.086600    5233 system_pods.go:61] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:36:01.086602    5233 system_pods.go:61] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:36:01.086606    5233 system_pods.go:61] "kube-vip-ha-224000" [6ca3e782-dd8d-4dd1-a888-c9a3c0b605a3] Running
	I1213 11:36:01.086609    5233 system_pods.go:61] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:36:01.086612    5233 system_pods.go:61] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:36:01.086616    5233 system_pods.go:61] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:36:01.086622    5233 system_pods.go:74] duration metric: took 193.351906ms to wait for pod list to return data ...
	I1213 11:36:01.086629    5233 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:36:01.272667    5233 request.go:632] Waited for 185.987795ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:36:01.272763    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:36:01.272774    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.272785    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.272793    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.276315    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:01.276400    5233 default_sa.go:45] found service account: "default"
	I1213 11:36:01.276412    5233 default_sa.go:55] duration metric: took 189.780655ms for default service account to be created ...
	I1213 11:36:01.276419    5233 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:36:01.473526    5233 request.go:632] Waited for 197.034094ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.473601    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.473653    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.473672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.473680    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.479025    5233 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1213 11:36:01.484476    5233 system_pods.go:86] 26 kube-system pods found
	I1213 11:36:01.484489    5233 system_pods.go:89] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:36:01.484495    5233 system_pods.go:89] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:36:01.484499    5233 system_pods.go:89] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:36:01.484502    5233 system_pods.go:89] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:36:01.484506    5233 system_pods.go:89] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:36:01.484508    5233 system_pods.go:89] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:36:01.484511    5233 system_pods.go:89] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:36:01.484516    5233 system_pods.go:89] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:36:01.484518    5233 system_pods.go:89] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:36:01.484522    5233 system_pods.go:89] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:36:01.484524    5233 system_pods.go:89] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:36:01.484527    5233 system_pods.go:89] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:36:01.484531    5233 system_pods.go:89] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:36:01.484534    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:36:01.484538    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:36:01.484540    5233 system_pods.go:89] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:36:01.484543    5233 system_pods.go:89] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:36:01.484546    5233 system_pods.go:89] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:36:01.484549    5233 system_pods.go:89] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:36:01.484552    5233 system_pods.go:89] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:36:01.484555    5233 system_pods.go:89] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:36:01.484558    5233 system_pods.go:89] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:36:01.484561    5233 system_pods.go:89] "kube-vip-ha-224000" [6ca3e782-dd8d-4dd1-a888-c9a3c0b605a3] Running
	I1213 11:36:01.484563    5233 system_pods.go:89] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:36:01.484567    5233 system_pods.go:89] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:36:01.484571    5233 system_pods.go:89] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:36:01.484576    5233 system_pods.go:126] duration metric: took 208.153776ms to wait for k8s-apps to be running ...
	I1213 11:36:01.484587    5233 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:36:01.484655    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:36:01.495689    5233 system_svc.go:56] duration metric: took 11.101939ms WaitForService to wait for kubelet
	I1213 11:36:01.495712    5233 kubeadm.go:582] duration metric: took 26.116392116s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:36:01.495725    5233 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:36:01.673624    5233 request.go:632] Waited for 177.853394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes
	I1213 11:36:01.673726    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes
	I1213 11:36:01.673737    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.673747    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.673785    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.677584    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:01.678344    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678354    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678360    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678364    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678367    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678369    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678372    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678375    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678378    5233 node_conditions.go:105] duration metric: took 182.650917ms to run NodePressure ...
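
The NodePressure pass reads each node's status; the figures above (17734596Ki ephemeral storage, 2 CPUs per node) come straight from node.Status.Capacity. A sketch of listing them with client-go, assuming the same illustrative kubeconfig path as the earlier sketches:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // illustrative
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Capacity is a ResourceList; the Cpu()/StorageEphemeral() helpers
        // return the corresponding resource.Quantity values.
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
                n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
    }
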
	I1213 11:36:01.678389    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:36:01.678404    5233 start.go:255] writing updated cluster config ...
	I1213 11:36:01.701519    5233 out.go:201] 
	I1213 11:36:01.755040    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:36:01.755118    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.792739    5233 out.go:177] * Starting "ha-224000-m04" worker node in "ha-224000" cluster
	I1213 11:36:01.850695    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:36:01.850719    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:36:01.850830    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:36:01.850840    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:36:01.850919    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.851367    5233 start.go:360] acquireMachinesLock for ha-224000-m04: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:36:01.851417    5233 start.go:364] duration metric: took 38.664µs to acquireMachinesLock for "ha-224000-m04"
	I1213 11:36:01.851430    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:36:01.851435    5233 fix.go:54] fixHost starting: m04
	I1213 11:36:01.851670    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:36:01.851689    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:36:01.863548    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51897
	I1213 11:36:01.863864    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:36:01.864237    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:36:01.864251    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:36:01.864489    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:36:01.864595    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:01.864718    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetState
	I1213 11:36:01.864801    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.864873    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid from json: 4360
	I1213 11:36:01.866047    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid 4360 missing from process table
	I1213 11:36:01.866070    5233 fix.go:112] recreateIfNeeded on ha-224000-m04: state=Stopped err=<nil>
	I1213 11:36:01.866083    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	W1213 11:36:01.866170    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:36:01.886701    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m04" ...
	I1213 11:36:01.927945    5233 main.go:141] libmachine: (ha-224000-m04) Calling .Start
	I1213 11:36:01.928215    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.928249    5233 main.go:141] libmachine: (ha-224000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid
	I1213 11:36:01.928315    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Using UUID 3aa2edb2-289d-46e2-9534-1f9a2dff1012
	I1213 11:36:01.954122    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Generated MAC e2:d2:09:69:a8:b4
	I1213 11:36:01.954144    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:36:01.954348    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3aa2edb2-289d-46e2-9534-1f9a2dff1012", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f0e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:36:01.954378    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3aa2edb2-289d-46e2-9534-1f9a2dff1012", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f0e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:36:01.954426    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3aa2edb2-289d-46e2-9534-1f9a2dff1012", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/ha-224000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:36:01.954465    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3aa2edb2-289d-46e2-9534-1f9a2dff1012 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/ha-224000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:36:01.954478    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:36:01.956069    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Pid is 5375
	I1213 11:36:01.956512    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Attempt 0
	I1213 11:36:01.956527    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.956630    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid from json: 5375
	I1213 11:36:01.959334    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Searching for e2:d2:09:69:a8:b4 in /var/db/dhcpd_leases ...
	I1213 11:36:01.959473    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:36:01.959490    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c9a76}
	I1213 11:36:01.959506    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9a30}
	I1213 11:36:01.959522    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:36:01.959533    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:36:01.959548    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Found match: e2:d2:09:69:a8:b4
	I1213 11:36:01.959568    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetConfigRaw
	I1213 11:36:01.959573    5233 main.go:141] libmachine: (ha-224000-m04) DBG | IP: 192.169.0.9
	I1213 11:36:01.960365    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:01.960553    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.960997    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:36:01.961019    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:01.961190    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:01.961347    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:01.961451    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:01.961542    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:01.961646    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:01.961799    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:01.961972    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:01.961979    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:36:01.968096    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:36:01.976979    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:36:01.978042    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:36:01.978064    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:36:01.978076    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:36:01.978087    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:36:02.370264    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:36:02.370282    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:36:02.485027    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:36:02.485059    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:36:02.485069    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:36:02.485077    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:36:02.485882    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:36:02.485893    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:36:08.339296    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:36:08.339331    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:36:08.339343    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:36:08.362659    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:36:37.019941    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:36:37.019956    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.020079    5233 buildroot.go:166] provisioning hostname "ha-224000-m04"
	I1213 11:36:37.020091    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.020181    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.020268    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.020362    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.020446    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.020550    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.020691    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.020850    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.020859    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m04 && echo "ha-224000-m04" | sudo tee /etc/hostname
	I1213 11:36:37.079455    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m04
	
	I1213 11:36:37.079470    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.079611    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.079712    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.079807    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.079899    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.080050    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.080202    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.080213    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:36:37.138441    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
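
Every provisioning step above funnels through an SSH session to the VM. A minimal sketch of such a remote runner using golang.org/x/crypto/ssh; the address, username, and key path are taken from the log, and host-key checking is disabled as is typical for throwaway test VMs:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; never in production
        }
        client, err := ssh.Dial("tcp", "192.169.0.9:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        // Run one remote command and capture stdout+stderr, as ssh_runner does.
        out, err := sess.CombinedOutput("hostname")
        fmt.Printf("%s (err=%v)\n", out, err)
    }
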
	I1213 11:36:37.138458    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:36:37.138471    5233 buildroot.go:174] setting up certificates
	I1213 11:36:37.138478    5233 provision.go:84] configureAuth start
	I1213 11:36:37.138489    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.138635    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:37.138758    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.138874    5233 provision.go:143] copyHostCerts
	I1213 11:36:37.138906    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:36:37.138980    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:36:37.138987    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:36:37.139126    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:36:37.139340    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:36:37.139389    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:36:37.139394    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:36:37.139490    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:36:37.139651    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:36:37.139700    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:36:37.139705    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:36:37.139785    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:36:37.139956    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m04 san=[127.0.0.1 192.169.0.9 ha-224000-m04 localhost minikube]
	I1213 11:36:37.316710    5233 provision.go:177] copyRemoteCerts
	I1213 11:36:37.316783    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:36:37.316812    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.316958    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.317051    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.317152    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.317246    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:37.347920    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:36:37.347992    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:36:37.367331    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:36:37.367418    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:36:37.387377    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:36:37.387449    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:36:37.407116    5233 provision.go:87] duration metric: took 268.631983ms to configureAuth
	I1213 11:36:37.407131    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:36:37.407332    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:36:37.407364    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:37.407494    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.407580    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.407680    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.407756    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.407841    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.407978    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.408110    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.408119    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:36:37.455460    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:36:37.455475    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:36:37.455568    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:36:37.455579    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.455716    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.455822    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.455928    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.456017    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.456183    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.456322    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.456371    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:36:37.514210    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	Environment=NO_PROXY=192.169.0.6,192.169.0.7
	Environment=NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:36:37.514229    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.514369    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.514460    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.514608    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.514700    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.514873    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.515015    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.515027    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:36:39.106697    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:36:39.106713    5233 machine.go:96] duration metric: took 37.146099544s to provisionDockerMachine
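
The docker.service drop-in shown above is assembled host-side and streamed to the VM through tee. A sketch of rendering such a unit with text/template; the fields here are illustrative, and the real provisioner's template carries many more settings:

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Service]
    Environment="NO_PROXY={{.NoProxy}}"
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock --tlsverify
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unit))
        // In the run above, NO_PROXY accumulated one IP per already-provisioned node.
        err := t.Execute(os.Stdout, struct {
            NoProxy string
            Port    int
        }{NoProxy: "192.169.0.6,192.169.0.7,192.169.0.8", Port: 2376})
        if err != nil {
            panic(err)
        }
    }
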
	I1213 11:36:39.106722    5233 start.go:293] postStartSetup for "ha-224000-m04" (driver="hyperkit")
	I1213 11:36:39.106729    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:36:39.106741    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.106958    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:36:39.106972    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.107076    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.107171    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.107250    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.107377    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.137664    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:36:39.140876    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:36:39.140886    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:36:39.140989    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:36:39.141205    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:36:39.141216    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:36:39.141482    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:36:39.148686    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:36:39.168356    5233 start.go:296] duration metric: took 61.625015ms for postStartSetup
	I1213 11:36:39.168377    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.168566    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:36:39.168580    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.168694    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.168784    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.168873    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.168955    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.200288    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:36:39.200368    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:36:39.252642    5233 fix.go:56] duration metric: took 37.401602513s for fixHost
	I1213 11:36:39.252667    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.252828    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.252931    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.253035    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.253138    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.253294    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:39.253427    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:39.253435    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:36:39.303241    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118599.429050956
	
	I1213 11:36:39.303262    5233 fix.go:216] guest clock: 1734118599.429050956
	I1213 11:36:39.303272    5233 fix.go:229] Guest: 2024-12-13 11:36:39.429050956 -0800 PST Remote: 2024-12-13 11:36:39.252657 -0800 PST m=+195.719809020 (delta=176.393956ms)
	I1213 11:36:39.303284    5233 fix.go:200] guest clock delta is within tolerance: 176.393956ms
	I1213 11:36:39.303287    5233 start.go:83] releasing machines lock for "ha-224000-m04", held for 37.452264193s
	I1213 11:36:39.303304    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.303439    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:39.324718    5233 out.go:177] * Found network options:
	I1213 11:36:39.345593    5233 out.go:177]   - NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8
	W1213 11:36:39.367406    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.367428    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.367438    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:36:39.367453    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.367872    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.367964    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.368045    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:36:39.368067    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	W1213 11:36:39.368071    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.368083    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.368091    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:36:39.368153    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:36:39.368162    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.368165    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.368280    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.368311    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.368396    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.368417    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.368502    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.368516    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.368581    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	W1213 11:36:39.395349    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:36:39.395429    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:36:39.444914    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:36:39.444929    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:36:39.445000    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:36:39.460519    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:36:39.468747    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:36:39.476970    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:36:39.477028    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:36:39.485250    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:36:39.493728    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:36:39.501920    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:36:39.510067    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:36:39.518621    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:36:39.527064    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:36:39.535503    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
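Each sed one-liner above rewrites a single key in /etc/containerd/config.toml. A sketch to review the resulting values (keys and expected values are taken directly from the commands above, nothing else is assumed):

    grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' \
        /etc/containerd/config.toml
    # Expected after the edits:
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false        # i.e. the "cgroupfs" cgroup driver
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true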
	I1213 11:36:39.544105    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:36:39.551996    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:36:39.552057    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:36:39.560903    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
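The sysctl failure above is expected while the br_netfilter module is not loaded; after the modprobe and the ip_forward write succeed, both settings should be readable. A quick check (kubeadm preflight expects both values to be 1):

    lsmod | grep br_netfilter                  # module now loaded
    sysctl net.bridge.bridge-nf-call-iptables  # should report ... = 1
    cat /proc/sys/net/ipv4/ip_forward          # should print 1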
	I1213 11:36:39.569057    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:36:39.663026    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:36:39.681615    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:36:39.681707    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:36:39.701692    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:36:39.713515    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:36:39.733157    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:36:39.744420    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:36:39.755241    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:36:39.778169    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:36:39.788619    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:36:39.803742    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:36:39.806753    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:36:39.814222    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
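The scp above writes a 190-byte systemd drop-in for cri-docker.service from memory; its exact contents are not shown in this log, so the sketch below only confirms the override is in place (systemctl cat prints the unit together with any drop-ins):

    ls -l /etc/systemd/system/cri-docker.service.d/10-cni.conf
    sudo systemctl cat cri-docker.service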
	I1213 11:36:39.828173    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:36:39.923220    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:36:40.025879    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:36:40.025908    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:36:40.040057    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:36:40.139577    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:37:41.169349    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.030424073s)
	I1213 11:37:41.169444    5233 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1213 11:37:41.204399    5233 out.go:201] 
	W1213 11:37:41.225442    5233 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Dec 13 19:36:37 ha-224000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.427068027Z" level=info msg="Starting up"
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.427760840Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.428340753Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.446225003Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461418150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461538159Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461607016Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461644040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461775643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461826393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461966604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462007624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462040126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462069720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462182838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462429601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464011795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464067757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464257837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464302280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464410649Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464463860Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465390367Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465443699Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465555213Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465597957Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465634744Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465705067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465941498Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466071120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466113283Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466145023Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466176156Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466211240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466250495Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466285590Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466317193Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466347259Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466376937Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466407325Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466446395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466488362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466530329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466566314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466607503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466641823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466672212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466702609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466732812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466764575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466794248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466823748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466854140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466886668Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466935305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466981167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467011716Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467066705Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467101883Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467131499Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467160087Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467188157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467216598Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467244211Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467402488Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467606858Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467674178Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467711081Z" level=info msg="containerd successfully booted in 0.022287s"
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.455600290Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.476104344Z" level=info msg="Loading containers: start."
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.568941234Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.144331314Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.199597389Z" level=info msg="Loading containers: done."
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210939061Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210976128Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210994749Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.211089971Z" level=info msg="Daemon has completed initialization"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.231136019Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 19:36:39 ha-224000-m04 systemd[1]: Started Docker Application Container Engine.
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.231344731Z" level=info msg="API listen on [::]:2376"
	Dec 13 19:36:40 ha-224000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.277223387Z" level=info msg="Processing signal 'terminated'"
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278137307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278251358Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278340377Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278256739Z" level=info msg="Daemon shutdown complete"
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: docker.service: Deactivated successfully.
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 19:36:41 ha-224000-m04 dockerd[1113]: time="2024-12-13T19:36:41.322763293Z" level=info msg="Starting up"
	Dec 13 19:37:41 ha-224000-m04 dockerd[1113]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
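The journal above isolates the failure on ha-224000-m04: the first dockerd (pid 508) launched its own managed containerd on /var/run/docker/containerd/containerd.sock and came up cleanly, but the restarted dockerd (pid 1113) spent its whole startup window dialing /run/containerd/containerd.sock and exited. Since the standalone containerd service was stopped earlier in this run (systemctl stop -f containerd), that socket may simply be absent. A hedged triage sketch for the node, separate from the test itself:

    systemctl status docker containerd         # which units are active or failed
    ls -l /run/containerd/containerd.sock      # does the socket dockerd dialed exist?
    sudo journalctl -u containerd --no-pager | tail -n 50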
	W1213 11:37:41.225503    5233 out.go:270] * 
	W1213 11:37:41.226123    5233 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:37:41.267588    5233 out.go:201] 
	
	
	==> Docker <==
	Dec 13 19:35:17 ha-224000 dockerd[1176]: time="2024-12-13T19:35:17.296092113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233837137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233911634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233925821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233995450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239334702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239439690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239450304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239575939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.205775306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.207076446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.207155526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.207356928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206616412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206773456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206817690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206899370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:57 ha-224000 dockerd[1176]: time="2024-12-13T19:35:57.457128150Z" level=info msg="shim disconnected" id=813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3 namespace=moby
	Dec 13 19:35:57 ha-224000 dockerd[1170]: time="2024-12-13T19:35:57.457607034Z" level=info msg="ignoring event" container=813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 19:35:57 ha-224000 dockerd[1176]: time="2024-12-13T19:35:57.457838474Z" level=warning msg="cleaning up after shim disconnected" id=813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3 namespace=moby
	Dec 13 19:35:57 ha-224000 dockerd[1176]: time="2024-12-13T19:35:57.457953841Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213145624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213212633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213225596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213337090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b961eac98708b       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   93cd09024c535       storage-provisioner
	f1b285481948b       50415e5d05f05                                                                                         2 minutes ago        Running             kindnet-cni               1                   06f29a39c508a       kindnet-687js
	38ee6f8374b04       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   6ed2d05ea2409       busybox-7dff88458-wbknx
	5f565c400b733       505d571f5fd56                                                                                         2 minutes ago        Running             kube-proxy                1                   31cf2effc73d7       kube-proxy-9wj7k
	5050cecf942e2       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   645aca2ea936b       coredns-7c65d6cfc9-5ds6r
	df8ddf72aa14f       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   8cef794a507b6       coredns-7c65d6cfc9-sswfx
	dba699a298586       0486b6c53a1b5                                                                                         3 minutes ago        Running             kube-controller-manager   2                   da5d4e126c370       kube-controller-manager-ha-224000
	2c7e84811a057       9499c9960544e                                                                                         3 minutes ago        Running             kube-apiserver            2                   6651a1d0a89d4       kube-apiserver-ha-224000
	d34c8e7a98686       f1c87c24be687                                                                                         4 minutes ago        Running             kube-vip                  0                   53478f9b98c3e       kube-vip-ha-224000
	0457a6eb9fce4       9499c9960544e                                                                                         4 minutes ago        Exited              kube-apiserver            1                   6651a1d0a89d4       kube-apiserver-ha-224000
	78030050b83d7       2e96e5913fc06                                                                                         4 minutes ago        Running             etcd                      1                   48f05aec7d5f4       etcd-ha-224000
	8cce3a8cb1260       847c7bc1a5418                                                                                         4 minutes ago        Running             kube-scheduler            1                   d605ad9f8c9f5       kube-scheduler-ha-224000
	dda62d21c5c2f       0486b6c53a1b5                                                                                         4 minutes ago        Exited              kube-controller-manager   1                   da5d4e126c370       kube-controller-manager-ha-224000
	89334114a6e1e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   8 minutes ago        Exited              busybox                   0                   ddc328d7180f5       busybox-7dff88458-wbknx
	cf4b333fe5f49       c69fa2e9cbf5f                                                                                         11 minutes ago       Exited              coredns                   0                   f18799b2271c7       coredns-7c65d6cfc9-sswfx
	f16805d6df5d4       c69fa2e9cbf5f                                                                                         11 minutes ago       Exited              coredns                   0                   653774da684e6       coredns-7c65d6cfc9-5ds6r
	532326a9b719a       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              11 minutes ago       Exited              kindnet-cni               0                   989ccdb8aa000       kindnet-687js
	94480a2dd9b5e       505d571f5fd56                                                                                         11 minutes ago       Exited              kube-proxy                0                   1cd5ef5ffe1e4       kube-proxy-9wj7k
	ad0dc00c3676d       2e96e5913fc06                                                                                         11 minutes ago       Exited              etcd                      0                   6121511eb160b       etcd-ha-224000
	63c39e011231f       847c7bc1a5418                                                                                         11 minutes ago       Exited              kube-scheduler            0                   2046a92fb05bb       kube-scheduler-ha-224000
	
	
	==> coredns [5050cecf942e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:39218 - 50752 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" - - 0 6.001691935s
	[ERROR] plugin/errors: 2 2774560578117609647.1532570917481937419. HINFO: read udp 10.244.0.4:35345->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:41938 - 7905 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" - - 0 6.001636827s
	[ERROR] plugin/errors: 2 2774560578117609647.1532570917481937419. HINFO: read udp 10.244.0.4:38380->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:41437 - 45110 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.001832207s
	[INFO] 127.0.0.1:44515 - 54662 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" - - 0 4.002458371s
	[ERROR] plugin/errors: 2 2774560578117609647.1532570917481937419. HINFO: read udp 10.244.0.4:41265->192.169.0.1:53: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[446765318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30005ms):
	Trace[446765318]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (19:35:47.544)
	Trace[446765318]: [30.005577524s] [30.005577524s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[393764073]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30006ms):
	Trace[393764073]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (19:35:47.544)
	Trace[393764073]: [30.006232941s] [30.006232941s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[531717446]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.543) (total time: 30002ms):
	Trace[531717446]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:35:47.544)
	Trace[531717446]: [30.002274294s] [30.002274294s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
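The errors above show this coredns instance timing out in two directions: forwarding to the upstream resolver at 192.169.0.1:53 and listing objects from the apiserver service VIP at 10.96.0.1:443. A hedged sketch to reproduce both paths from inside the cluster (pod name and image are illustrative, not taken from the log; busybox's nslookup and nc are assumed):

    kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- sh -c '
      nslookup kubernetes.io;                       # external name -> upstream 192.169.0.1
      nc -w 5 10.96.0.1 443 </dev/null && echo "apiserver VIP reachable"'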
	
	
	==> coredns [cf4b333fe5f4] <==
	[INFO] 10.244.2.2:52684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320449s
	[INFO] 10.244.2.2:56489 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.010940453s
	[INFO] 10.244.2.2:53656 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010500029s
	[INFO] 10.244.1.2:40275 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235614s
	[INFO] 10.244.0.4:54501 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000070742s
	[INFO] 10.244.2.2:54661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099137s
	[INFO] 10.244.2.2:53526 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010894436s
	[INFO] 10.244.2.2:43837 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093129s
	[INFO] 10.244.2.2:48144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01305588s
	[INFO] 10.244.2.2:37929 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083719s
	[INFO] 10.244.2.2:56915 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109123s
	[INFO] 10.244.2.2:54881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064664s
	[INFO] 10.244.1.2:36673 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000091432s
	[INFO] 10.244.1.2:34220 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009472s
	[INFO] 10.244.1.2:38397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007902s
	[INFO] 10.244.0.4:44003 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000090711s
	[INFO] 10.244.0.4:37919 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060032s
	[INFO] 10.244.0.4:57710 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104441s
	[INFO] 10.244.2.2:36812 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000142147s
	[INFO] 10.244.1.2:43077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013892s
	[INFO] 10.244.0.4:44480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107424s
	[INFO] 10.244.0.4:50392 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00013146s
	[INFO] 10.244.0.4:57954 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090837s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [df8ddf72aa14] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:35560 - 57542 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" - - 0 6.003265442s
	[ERROR] plugin/errors: 2 7691483522066365998.6584771563269026758. HINFO: read udp 10.244.0.3:57849->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:36876 - 8169 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" - - 0 2.001203837s
	[ERROR] plugin/errors: 2 7691483522066365998.6584771563269026758. HINFO: read udp 10.244.0.3:33115->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:55518 - 55981 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" - - 0 6.003381935s
	[ERROR] plugin/errors: 2 7691483522066365998.6584771563269026758. HINFO: read udp 10.244.0.3:35637->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:51113 - 20297 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.000906393s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[469351415]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30002ms):
	Trace[469351415]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:35:47.541)
	Trace[469351415]: [30.002900538s] [30.002900538s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[235804559]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30004ms):
	Trace[235804559]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:35:47.543)
	Trace[235804559]: [30.004014569s] [30.004014569s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[222840766]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.542) (total time: 30002ms):
	Trace[222840766]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:35:47.544)
	Trace[222840766]: [30.002499147s] [30.002499147s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [f16805d6df5d] <==
	[INFO] 10.244.0.4:50423 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000616257s
	[INFO] 10.244.0.4:51571 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066308s
	[INFO] 10.244.0.4:55425 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000034221s
	[INFO] 10.244.0.4:33674 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091937s
	[INFO] 10.244.0.4:60931 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037068s
	[INFO] 10.244.2.2:51638 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103452s
	[INFO] 10.244.2.2:33033 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088733s
	[INFO] 10.244.2.2:51032 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145099s
	[INFO] 10.244.2.2:58035 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067066s
	[INFO] 10.244.1.2:35671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137338s
	[INFO] 10.244.1.2:43244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083679s
	[INFO] 10.244.1.2:49096 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008999s
	[INFO] 10.244.1.2:50254 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108638s
	[INFO] 10.244.0.4:50170 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091228s
	[INFO] 10.244.0.4:60410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158647s
	[INFO] 10.244.0.4:51342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086722s
	[INFO] 10.244.2.2:37837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076855s
	[INFO] 10.244.2.2:53946 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100477s
	[INFO] 10.244.2.2:48539 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00006865s
	[INFO] 10.244.1.2:34571 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102259s
	[INFO] 10.244.1.2:48156 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010558s
	[INFO] 10.244.1.2:56382 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000094051s
	[INFO] 10.244.0.4:56589 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000045096s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-224000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-224000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=ha-224000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_13T11_26_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:26:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-224000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:37:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-224000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c482b8662654c3a869b1ecefe5cf9ee
	  System UUID:                b2cf45fe-0000-0000-a947-282a845e5503
	  Boot ID:                    a3b32e80-0a2c-43a6-967b-82a2f6e8eef5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wbknx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 coredns-7c65d6cfc9-5ds6r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-7c65d6cfc9-sswfx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-ha-224000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-687js                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-224000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-224000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9wj7k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-224000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-224000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 2m27s                  kube-proxy       
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node ha-224000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node ha-224000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node ha-224000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                    node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-224000 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           9m16s                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node ha-224000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node ha-224000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node ha-224000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	
	
	Name:               ha-224000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-224000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=ha-224000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_13T11_27_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:27:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-224000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:37:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-224000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a69af53a722464e92c469155271604e
	  System UUID:                573e4bce-0000-0000-aba3-b379863bb495
	  Boot ID:                    ae7bc928-29f4-4c6b-bd14-f4e659fc8097
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l97s5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 etcd-ha-224000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-c6kgd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-224000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-224000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9wsr4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-224000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-224000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m19s                  kube-proxy       
	  Normal   Starting                 5m15s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-224000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-224000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-224000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           9m16s                  node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   Starting                 5m20s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 5m19s                  kubelet          Node ha-224000-m02 has been rebooted, boot id: 77378fb8-5f4b-4218-9a14-15ce228529ff
	  Normal   NodeHasSufficientMemory  5m19s                  kubelet          Node ha-224000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m19s                  kubelet          Node ha-224000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m19s                  kubelet          Node ha-224000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m12s                  node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   Starting                 3m30s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m29s (x8 over 3m30s)  kubelet          Node ha-224000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m29s (x8 over 3m30s)  kubelet          Node ha-224000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m29s (x7 over 3m30s)  kubelet          Node ha-224000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           2m11s                  node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	
	
	Name:               ha-224000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-224000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=ha-224000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_13T11_31_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:31:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-224000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:32:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-224000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e9882ffc62647968bea651d5ce1f097
	  System UUID:                3aa246e2-0000-0000-9534-1f9a2dff1012
	  Boot ID:                    0f3125e8-e3e0-4806-91cb-fd0eaa4f608f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-g6ss2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-proxy-7b8ch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m24s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    6m30s (x2 over 6m30s)  kubelet          Node ha-224000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     6m30s (x2 over 6m30s)  kubelet          Node ha-224000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m30s (x2 over 6m30s)  kubelet          Node ha-224000-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           6m26s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           6m26s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  NodeReady                6m7s                   kubelet          Node ha-224000-m04 status is now: NodeReady
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  NodeNotReady             2m37s                  node-controller  Node ha-224000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
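
For context on the block above: ha-224000-m04's lease RenewTime stops at 19:32:56, after which the node controller flips every condition to Unknown ("Kubelet stopped posting node status.") and applies the unreachable NoSchedule/NoExecute taints; the NodeNotReady event 2m37s ago is the same transition. A quick way to read that state directly, as a sketch (assumes kubectl is pointed at this cluster's kubeconfig):

    # Ready condition status and any taints on the node described above
    kubectl get node ha-224000-m04 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}{.spec.taints[*].key}{"\n"}'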
	
	
	==> dmesg <==
	[  +0.035991] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008030] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.835151] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007092] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.809793] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.216222] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.358309] systemd-fstab-generator[460]: Ignoring "noauto" option for root device
	[  +0.105099] systemd-fstab-generator[472]: Ignoring "noauto" option for root device
	[  +1.959406] systemd-fstab-generator[1100]: Ignoring "noauto" option for root device
	[  +0.254010] systemd-fstab-generator[1136]: Ignoring "noauto" option for root device
	[  +0.104125] systemd-fstab-generator[1148]: Ignoring "noauto" option for root device
	[  +0.104856] systemd-fstab-generator[1162]: Ignoring "noauto" option for root device
	[  +0.058611] kauditd_printk_skb: 149 callbacks suppressed
	[  +2.414891] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.103198] systemd-fstab-generator[1400]: Ignoring "noauto" option for root device
	[  +0.113797] systemd-fstab-generator[1412]: Ignoring "noauto" option for root device
	[  +0.119494] systemd-fstab-generator[1427]: Ignoring "noauto" option for root device
	[  +0.429719] systemd-fstab-generator[1587]: Ignoring "noauto" option for root device
	[  +6.882724] kauditd_printk_skb: 172 callbacks suppressed
	[Dec13 19:34] kauditd_printk_skb: 40 callbacks suppressed
	[Dec13 19:35] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.801033] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [78030050b83d] <==
	{"level":"info","ts":"2024-12-13T19:35:36.914506Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:35:36.915577Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:35:36.968970Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"e397b3b47bd62ab9","to":"afd89b9ec393451","stream-type":"stream Message"}
	{"level":"info","ts":"2024-12-13T19:35:36.969147Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:35:36.970728Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"e397b3b47bd62ab9","to":"afd89b9ec393451","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-12-13T19:35:36.970799Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.630337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e397b3b47bd62ab9 switched to configuration voters=(7605335155526620764 16399774155846068921)"}
	{"level":"info","ts":"2024-12-13T19:37:49.631543Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"7182ce703fa4d8d4","local-member-id":"e397b3b47bd62ab9","removed-remote-peer-id":"afd89b9ec393451","removed-remote-peer-urls":["https://192.169.0.8:2380"]}
	{"level":"info","ts":"2024-12-13T19:37:49.631741Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.632018Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.632125Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.631909Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"e397b3b47bd62ab9","removed-member-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.632947Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-12-13T19:37:49.633407Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.633562Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.633738Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.633916Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451","error":"context canceled"}
	{"level":"warn","ts":"2024-12-13T19:37:49.634028Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"afd89b9ec393451","error":"failed to read afd89b9ec393451 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-12-13T19:37:49.634104Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.634388Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451","error":"context canceled"}
	{"level":"info","ts":"2024-12-13T19:37:49.634446Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.634469Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.634519Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"e397b3b47bd62ab9","removed-remote-peer-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.640548Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"e397b3b47bd62ab9","remote-peer-id-stream-handler":"e397b3b47bd62ab9","remote-peer-id-from":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.644460Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"e397b3b47bd62ab9","remote-peer-id-stream-handler":"e397b3b47bd62ab9","remote-peer-id-from":"afd89b9ec393451"}
	
	
	==> etcd [ad0dc00c3676] <==
	2024/12/13 19:33:15 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-13T19:33:15.919286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"911.52519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-12-13T19:33:15.919296Z","caller":"traceutil/trace.go:171","msg":"trace[646065576] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"911.536819ms","start":"2024-12-13T19:33:15.007757Z","end":"2024-12-13T19:33:15.919293Z","steps":["trace[646065576] 'agreement among raft nodes before linearized reading'  (duration: 911.525741ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:33:15.919307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:33:15.007742Z","time spent":"911.561075ms","remote":"127.0.0.1:57240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2024/12/13 19:33:15 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-13T19:33:15.953693Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.6:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-13T19:33:15.953754Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.6:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-13T19:33:15.953797Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"e397b3b47bd62ab9","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-12-13T19:33:15.956144Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956196Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956235Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956328Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956354Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956412Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956443Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956450Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956457Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956468Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956907Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956957Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.957005Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.957016Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.960175Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.6:2380"}
	{"level":"info","ts":"2024-12-13T19:33:15.960341Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.6:2380"}
	{"level":"info","ts":"2024-12-13T19:33:15.960352Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-224000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.6:2380"],"advertise-client-urls":["https://192.169.0.6:2379"]}
	
	
	==> kernel <==
	 19:37:55 up 4 min,  0 users,  load average: 0.54, 0.41, 0.20
	Linux ha-224000 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [532326a9b719] <==
	I1213 19:32:38.955729       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:32:48.951745       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:32:48.951937       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:32:48.952237       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:32:48.952297       1 main.go:301] handling current node
	I1213 19:32:48.952312       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:32:48.952320       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:32:48.952519       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:32:48.952573       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:32:58.952815       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:32:58.952836       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:32:58.953197       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:32:58.953257       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:32:58.953413       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:32:58.953484       1 main.go:301] handling current node
	I1213 19:32:58.953506       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:32:58.953519       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:33:08.953874       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:33:08.953928       1 main.go:301] handling current node
	I1213 19:33:08.954191       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:33:08.954234       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:33:08.955460       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:33:08.955468       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:33:08.955667       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:33:08.955695       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f1b285481948] <==
	I1213 19:37:21.245123       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:37:21.245378       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:37:21.245522       1 main.go:301] handling current node
	I1213 19:37:31.243688       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:37:31.243758       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:37:31.243918       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:37:31.244043       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:37:31.244392       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:37:31.244432       1 main.go:301] handling current node
	I1213 19:37:31.244443       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:37:31.244449       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:37:41.249106       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:37:41.249448       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:37:41.249978       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:37:41.250111       1 main.go:301] handling current node
	I1213 19:37:41.250163       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:37:41.250282       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:37:41.250439       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:37:41.250519       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:37:51.243452       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:37:51.243568       1 main.go:301] handling current node
	I1213 19:37:51.243598       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:37:51.243617       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:37:51.243864       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:37:51.243930       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
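
Note that in this newer kindnet excerpt the 19:37:51 iteration no longer visits 192.169.0.8 (ha-224000-m03): once the node object is deleted, kindnet drops it from its per-node CIDR loop. A hedged one-liner to compare the remaining nodes against the "has CIDR" lines above (plain kubectl, nothing assumed beyond this cluster's kubeconfig):

    # Node-to-PodCIDR mapping, matching kindnet's routing table
    kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR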
	
	
	==> kube-apiserver [0457a6eb9fce] <==
	I1213 19:33:49.820720       1 options.go:228] external host was not specified, using 192.169.0.6
	I1213 19:33:49.826974       1 server.go:142] Version: v1.31.2
	I1213 19:33:49.828876       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:33:50.369348       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1213 19:33:50.373560       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1213 19:33:50.376229       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1213 19:33:50.376292       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1213 19:33:50.376453       1 instance.go:232] Using reconciler: lease
	W1213 19:34:10.367496       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1213 19:34:10.367678       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1213 19:34:10.377527       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
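
This first apiserver instance dies by design: between 19:33:50 and 19:34:10 its local etcd (127.0.0.1:2379) was still down (see the etcd shutdown above), so the storage factory hits its deadline and the process exits fatally, to be restarted as a static pod by the kubelet. Once a replacement is up, the etcd dependency is visible per-check via the readyz endpoint, e.g. this sketch:

    # Verbose readiness breakdown; the etcd check fails while 127.0.0.1:2379 is unreachable
    kubectl get --raw '/readyz?verbose' | grep -i etcd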
	
	
	==> kube-apiserver [2c7e84811a05] <==
	I1213 19:34:33.858755       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1213 19:34:33.858846       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1213 19:34:33.932383       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1213 19:34:33.934311       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 19:34:33.944721       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 19:34:33.944939       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 19:34:33.945156       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1213 19:34:33.945214       1 policy_source.go:224] refreshing policies
	I1213 19:34:33.946446       1 shared_informer.go:320] Caches are synced for configmaps
	I1213 19:34:33.950262       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 19:34:33.950654       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 19:34:33.952135       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1213 19:34:33.958706       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1213 19:34:33.958952       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1213 19:34:33.959051       1 aggregator.go:171] initial CRD sync complete...
	I1213 19:34:33.959071       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 19:34:33.959175       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 19:34:33.959196       1 cache.go:39] Caches are synced for autoregister controller
	W1213 19:34:33.972653       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.7]
	I1213 19:34:33.974278       1 controller.go:615] quota admission added evaluator for: endpoints
	I1213 19:34:33.985761       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1213 19:34:33.990131       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1213 19:34:34.005835       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 19:34:34.842581       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1213 19:34:35.103753       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	
	
	==> kube-controller-manager [dba699a29858] <==
	I1213 19:35:37.488940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m03"
	I1213 19:35:39.297752       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.723659ms"
	I1213 19:35:39.297831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.52µs"
	I1213 19:35:43.044900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m04"
	I1213 19:35:43.142912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m04"
	I1213 19:35:55.552893       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9khgk\": the object has been modified; please apply your changes to the latest version and try again"
	I1213 19:35:55.553121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="14.725541ms"
	I1213 19:35:55.553280       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.548µs"
	I1213 19:35:55.553635       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"62fdbc68-3cb2-4c62-84a6-34ec3a6b8454", APIVersion:"v1", ResourceVersion:"255", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9khgk": the object has been modified; please apply your changes to the latest version and try again
	I1213 19:35:55.571600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.492248ms"
	I1213 19:35:55.576690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="52.23µs"
	I1213 19:35:55.577745       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9khgk\": the object has been modified; please apply your changes to the latest version and try again"
	I1213 19:35:55.578045       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"62fdbc68-3cb2-4c62-84a6-34ec3a6b8454", APIVersion:"v1", ResourceVersion:"255", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9khgk": the object has been modified; please apply your changes to the latest version and try again
	I1213 19:35:55.625981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="11.797733ms"
	I1213 19:35:55.626922       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.294µs"
	I1213 19:37:46.369030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m03"
	I1213 19:37:46.381408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m03"
	I1213 19:37:46.541953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="142.510127ms"
	I1213 19:37:46.542674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.239µs"
	I1213 19:37:48.552936       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.345µs"
	I1213 19:37:49.216749       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.583µs"
	I1213 19:37:49.219502       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.65µs"
	I1213 19:37:50.388977       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m03"
	E1213 19:37:50.419561       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-224000-m03\", UID:\"dbfd547b-46b2-4d01-b5ad-c13202bbbb2d\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-224000-m03\", UID:\"5f2128c5-ecb0-4494-b745-3548943f47df\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-224000-m03\" not found" logger="UnhandledError"
	E1213 19:37:50.420034       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-224000-m03\", UID:\"e099dcf0-e130-4edd-882b-188b4e85113b\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-224000-m03\", UID:\"5f2128c5-ecb0-4494-b745-3548943f47df\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-224000-m03\" not found" logger="UnhandledError"
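
The two "Unhandled Error" entries above are a benign garbage-collector race: ha-224000-m03's Lease and CSINode were owned by the now-deleted Node object, and by the time the GC tried to sync them they were already gone, hence the "not found" suffixes. Both lookups should return NotFound once the node is removed, e.g.:

    # Owned objects of the deleted node; NotFound here means the GC errors were cosmetic
    kubectl -n kube-node-lease get lease ha-224000-m03
    kubectl get csinode ha-224000-m03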
	
	
	==> kube-controller-manager [dda62d21c5c2] <==
	I1213 19:33:49.641671       1 serving.go:386] Generated self-signed cert in-memory
	I1213 19:33:50.338076       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1213 19:33:50.338108       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:33:50.340327       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 19:33:50.340428       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 19:33:50.340697       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1213 19:33:50.340882       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 19:34:11.384884       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.6:8443/healthz\": dial tcp 192.169.0.6:8443: connect: connection refused"
	
	
	==> kube-proxy [5f565c400b73] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1213 19:35:27.545116       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1213 19:35:27.561280       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.6"]
	E1213 19:35:27.561547       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:35:27.593343       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1213 19:35:27.593524       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 19:35:27.593695       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:35:27.599613       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:35:27.600762       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:35:27.600792       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:35:27.603008       1 config.go:199] "Starting service config controller"
	I1213 19:35:27.603210       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:35:27.603407       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:35:27.603433       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:35:27.604612       1 config.go:328] "Starting node config controller"
	I1213 19:35:27.604643       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:35:27.704590       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1213 19:35:27.704694       1 shared_informer.go:320] Caches are synced for node config
	I1213 19:35:27.704710       1 shared_informer.go:320] Caches are synced for service config
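
The "Error cleaning up nftables rules" entries at the top of this excerpt are harmless: kube-proxy in iptables mode first tries to remove any stale nftables state, and on this Buildroot kernel the nft ruleset operations are unsupported, so the probe fails and it proceeds with the iptables proxier, as the lines that follow confirm. The failing probe corresponds to feeding nft the ruleset shown in the error, roughly:

    # What kube-proxy pipes to nft during cleanup; fails with "Operation not supported" here
    echo 'add table ip kube-proxy' | nft -f /dev/stdin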
	
	
	==> kube-proxy [94480a2dd9b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1213 19:26:14.203354       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1213 19:26:14.213097       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.6"]
	E1213 19:26:14.213174       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:26:14.241202       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1213 19:26:14.241246       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 19:26:14.241263       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:26:14.244275       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:26:14.244855       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:26:14.244882       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:26:14.246052       1 config.go:199] "Starting service config controller"
	I1213 19:26:14.246200       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:26:14.246348       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:26:14.246374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:26:14.246424       1 config.go:328] "Starting node config controller"
	I1213 19:26:14.246441       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:26:14.347309       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1213 19:26:14.347360       1 shared_informer.go:320] Caches are synced for service config
	I1213 19:26:14.347669       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [63c39e011231] <==
	E1213 19:28:30.473242       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jxwhq\": pod kube-proxy-jxwhq is already assigned to node \"ha-224000-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jxwhq" node="ha-224000-m03"
	E1213 19:28:30.474646       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d5770b31-991f-43c2-82a4-f0051e25f645(kube-system/kindnet-kpjh5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kpjh5"
	E1213 19:28:30.474870       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4b9ed970-5ad3-4b15-a714-24f0f06632c8(kube-system/kube-proxy-gmw9z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-gmw9z"
	E1213 19:28:30.475888       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kpjh5\": pod kindnet-kpjh5 is already assigned to node \"ha-224000-m03\"" pod="kube-system/kindnet-kpjh5"
	E1213 19:28:30.476671       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jxwhq\": pod kube-proxy-jxwhq is already assigned to node \"ha-224000-m03\"" pod="kube-system/kube-proxy-jxwhq"
	I1213 19:28:30.476729       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jxwhq" node="ha-224000-m03"
	I1213 19:28:30.475988       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kpjh5" node="ha-224000-m03"
	E1213 19:28:30.475897       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-gmw9z\": pod kube-proxy-gmw9z is already assigned to node \"ha-224000-m03\"" pod="kube-system/kube-proxy-gmw9z"
	I1213 19:28:30.478106       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-gmw9z" node="ha-224000-m03"
	E1213 19:28:59.957880       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod eaf3a368-16e9-43ba-ae1f-1ddc94ef233e(default/busybox-7dff88458-l97s5) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-l97s5"
	E1213 19:28:59.957902       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod eaf3a368-16e9-43ba-ae1f-1ddc94ef233e(default/busybox-7dff88458-l97s5) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-l97s5"
	I1213 19:28:59.957915       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-l97s5" node="ha-224000-m02"
	E1213 19:29:00.063963       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-zs25q is already present in the active queue" pod="default/busybox-7dff88458-zs25q"
	E1213 19:29:00.081842       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-zs25q\" not found" pod="default/busybox-7dff88458-zs25q"
	E1213 19:31:24.582665       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7b8ch\": pod kube-proxy-7b8ch is already assigned to node \"ha-224000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7b8ch" node="ha-224000-m04"
	E1213 19:31:24.582727       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7b8ch\": pod kube-proxy-7b8ch is already assigned to node \"ha-224000-m04\"" pod="kube-system/kube-proxy-7b8ch"
	E1213 19:31:24.582830       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8ccp4\": pod kube-proxy-8ccp4 is already assigned to node \"ha-224000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8ccp4" node="ha-224000-m04"
	E1213 19:31:24.582939       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8ccp4\": pod kube-proxy-8ccp4 is already assigned to node \"ha-224000-m04\"" pod="kube-system/kube-proxy-8ccp4"
	E1213 19:31:24.583359       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qqm9r\": pod kindnet-qqm9r is already assigned to node \"ha-224000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qqm9r" node="ha-224000-m04"
	E1213 19:31:24.583404       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qqm9r\": pod kindnet-qqm9r is already assigned to node \"ha-224000-m04\"" pod="kube-system/kindnet-qqm9r"
	I1213 19:31:24.586044       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7b8ch" node="ha-224000-m04"
	I1213 19:33:15.853518       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 19:33:15.859188       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 19:33:15.859357       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1213 19:33:15.864811       1 run.go:72] "command failed" err="finished without leader elect"
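
The "already assigned to node" bind failures in this excerpt are a known benign race in multi-control-plane clusters: the pod was already bound by another scheduler instance (or an earlier retry) during a leadership transition, the losing bind fails with a conflict, and the pod is simply not re-queued ("Pod has been assigned to node. Abort adding it back to queue."). Which instance currently holds leadership can be checked via its election lease, sketched as:

    # Holder of the scheduler leader-election lease; only one instance should bind pods
    kubectl -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}{"\n"}'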
	
	
	==> kube-scheduler [8cce3a8cb126] <==
	E1213 19:34:33.927009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.927118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 19:34:33.927159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.927343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 19:34:33.927384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.927452       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 19:34:33.927490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.929589       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 19:34:33.929630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.929845       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 19:34:33.929886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.929952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 19:34:33.930027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 19:34:33.930195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 19:34:33.930473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 19:34:33.930610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 19:34:33.930722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930989       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 19:34:33.931026       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1213 19:34:55.098739       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1213 19:37:46.507664       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-n5j7r\" not found" pod="default/busybox-7dff88458-n5j7r"
	
	
	==> kubelet <==
	Dec 13 19:35:42 ha-224000 kubelet[1594]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 19:35:42 ha-224000 kubelet[1594]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 19:35:42 ha-224000 kubelet[1594]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 19:35:42 ha-224000 kubelet[1594]: I1213 19:35:42.186925    1594 scope.go:117] "RemoveContainer" containerID="901560cab05afd01ac1f97679993cf515730a563066592c72d364d4f023faa11"
	Dec 13 19:35:57 ha-224000 kubelet[1594]: I1213 19:35:57.639988    1594 scope.go:117] "RemoveContainer" containerID="6e865c58301353a95a17f9b7cc0efd9f449785d4fa6d23de4eae2d1f5ef7aa69"
	Dec 13 19:35:57 ha-224000 kubelet[1594]: I1213 19:35:57.640662    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:35:57 ha-224000 kubelet[1594]: E1213 19:35:57.640842    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:09 ha-224000 kubelet[1594]: I1213 19:36:09.158547    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:09 ha-224000 kubelet[1594]: E1213 19:36:09.158675    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:20 ha-224000 kubelet[1594]: I1213 19:36:20.159152    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:20 ha-224000 kubelet[1594]: E1213 19:36:20.159302    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:31 ha-224000 kubelet[1594]: I1213 19:36:31.158111    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:31 ha-224000 kubelet[1594]: E1213 19:36:31.158349    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:42 ha-224000 kubelet[1594]: I1213 19:36:42.158392    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:42 ha-224000 kubelet[1594]: E1213 19:36:42.198509    1594 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 13 19:36:42 ha-224000 kubelet[1594]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 13 19:36:42 ha-224000 kubelet[1594]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 19:36:42 ha-224000 kubelet[1594]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 19:36:42 ha-224000 kubelet[1594]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 19:36:42 ha-224000 kubelet[1594]: I1213 19:36:42.216134    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:37:42 ha-224000 kubelet[1594]: E1213 19:37:42.172559    1594 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 13 19:37:42 ha-224000 kubelet[1594]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 13 19:37:42 ha-224000 kubelet[1594]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 19:37:42 ha-224000 kubelet[1594]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 19:37:42 ha-224000 kubelet[1594]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
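The kube-scheduler restart above logs a burst of "forbidden" list/watch errors for system:kube-scheduler; they are consistent with the API server still coming back up, and they stop once the informer caches sync at 19:34:55. As a rough post-restart check that the scheduler's RBAC bindings are intact, a sketch (assumes kubectl on PATH and the ha-224000 context from this run; not part of the test harness):

	# Impersonate the scheduler and confirm it can list the resources it watches.
	for r in nodes services persistentvolumes csinodes.storage.k8s.io poddisruptionbudgets.policy; do
	  kubectl --context ha-224000 auth can-i list "$r" --as=system:kube-scheduler
	done  # each iteration should print "yes" once the control plane is healthy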
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-224000 -n ha-224000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-224000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-9j5jp
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-224000 describe pod busybox-7dff88458-9j5jp
helpers_test.go:282: (dbg) kubectl --context ha-224000 describe pod busybox-7dff88458-9j5jp:

-- stdout --
	Name:             busybox-7dff88458-9j5jp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-55x6l (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-55x6l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age               From               Message
	  ----     ------            ----              ----               -------
	  Warning  FailedScheduling  10s               default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  10s               default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  9s (x2 over 11s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (11.45s)
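The describe output above accounts for the Pending pod: all four nodes are ruled out (one with an untolerated unreachable taint, one marked unschedulable, two rejected by busybox's pod anti-affinity), so the FailedScheduling events simply repeat until the test gives up. To retrace the post-mortem by hand, a minimal sketch using the same context and pod name as this log:

	# List non-running pods, then inspect scheduling events and node states.
	kubectl --context ha-224000 get po -A --field-selector=status.phase!=Running
	kubectl --context ha-224000 describe pod busybox-7dff88458-9j5jp
	kubectl --context ha-224000 get nodes -o wide  # shows which nodes are NotReady or unschedulable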

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (4.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:415: expected profile "ha-224000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-224000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-224000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-224000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.9\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
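The assertion at ha_test.go:415 compares the Status field of the ha-224000 entry in that JSON blob ("Degraded" expected, "Starting" observed). To spot-check the value by hand, a small sketch (assumes jq is installed; the binary path and profile name are taken from this run):

	# Print just the profile status the test asserts on.
	out/minikube-darwin-amd64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-224000") | .Status'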
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-224000 -n ha-224000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-224000 logs -n 25: (3.296097381s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m02 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m03_ha-224000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m03:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04:/home/docker/cp-test_ha-224000-m03_ha-224000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m04 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m03_ha-224000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-224000 cp testdata/cp-test.txt                                                                                            | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1762227409/001/cp-test_ha-224000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000:/home/docker/cp-test_ha-224000-m04_ha-224000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000 sudo cat                                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m04_ha-224000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m02:/home/docker/cp-test_ha-224000-m04_ha-224000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m02 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m04_ha-224000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt                                                                          | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m03:/home/docker/cp-test_ha-224000-m04_ha-224000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n                                                                                                             | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | ha-224000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-224000 ssh -n ha-224000-m03 sudo cat                                                                                      | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | /home/docker/cp-test_ha-224000-m04_ha-224000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-224000 node stop m02 -v=7                                                                                                 | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-224000 node start m02 -v=7                                                                                                | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-224000 -v=7                                                                                                       | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-224000 -v=7                                                                                                            | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:32 PST | 13 Dec 24 11:33 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-224000 --wait=true -v=7                                                                                                | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:33 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-224000                                                                                                            | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:37 PST |                     |
	| node    | ha-224000 node delete m03 -v=7                                                                                               | ha-224000 | jenkins | v1.34.0 | 13 Dec 24 11:37 PST | 13 Dec 24 11:37 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 11:33:23
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:33:23.556546    5233 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:33:23.556761    5233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:33:23.556766    5233 out.go:358] Setting ErrFile to fd 2...
	I1213 11:33:23.556770    5233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:33:23.556939    5233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:33:23.558493    5233 out.go:352] Setting JSON to false
	I1213 11:33:23.588845    5233 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1973,"bootTime":1734116430,"procs":551,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 11:33:23.588936    5233 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 11:33:23.610818    5233 out.go:177] * [ha-224000] minikube v1.34.0 on Darwin 15.1.1
	I1213 11:33:23.652607    5233 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 11:33:23.652667    5233 notify.go:220] Checking for updates...
	I1213 11:33:23.695155    5233 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:23.716580    5233 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 11:33:23.758076    5233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:33:23.778447    5233 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 11:33:23.799542    5233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:33:23.821105    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:23.821299    5233 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 11:33:23.821877    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:23.821927    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:23.834367    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51814
	I1213 11:33:23.834740    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:23.835143    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:23.835152    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:23.835371    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:23.835545    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:23.867473    5233 out.go:177] * Using the hyperkit driver based on existing profile
	I1213 11:33:23.909252    5233 start.go:297] selected driver: hyperkit
	I1213 11:33:23.909282    5233 start.go:901] validating driver "hyperkit" against &{Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:23.909534    5233 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:33:23.909725    5233 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:33:23.909981    5233 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20090-800/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1213 11:33:23.922579    5233 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1213 11:33:23.929434    5233 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:23.929452    5233 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1213 11:33:23.935885    5233 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:33:23.935924    5233 cni.go:84] Creating CNI manager for ""
	I1213 11:33:23.935972    5233 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1213 11:33:23.936044    5233 start.go:340] cluster config:
	{Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:23.936181    5233 iso.go:125] acquiring lock: {Name:mke3ec926417a11c6d5b1356d2702df4068fa1cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:33:23.978382    5233 out.go:177] * Starting "ha-224000" primary control-plane node in "ha-224000" cluster
	I1213 11:33:23.999338    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:23.999406    5233 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1213 11:33:23.999429    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:33:23.999602    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:33:23.999621    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:33:23.999813    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:24.000837    5233 start.go:360] acquireMachinesLock for ha-224000: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:33:24.000950    5233 start.go:364] duration metric: took 87.843µs to acquireMachinesLock for "ha-224000"
	I1213 11:33:24.000984    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:33:24.001006    5233 fix.go:54] fixHost starting: 
	I1213 11:33:24.001462    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:24.001491    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:24.013395    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51816
	I1213 11:33:24.013731    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:24.014113    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:24.014132    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:24.014335    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:24.014453    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:24.014563    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:33:24.014649    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.014739    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 4112
	I1213 11:33:24.015879    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid 4112 missing from process table
	I1213 11:33:24.015946    5233 fix.go:112] recreateIfNeeded on ha-224000: state=Stopped err=<nil>
	I1213 11:33:24.015971    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	W1213 11:33:24.016061    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:33:24.037410    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000" ...
	I1213 11:33:24.058353    5233 main.go:141] libmachine: (ha-224000) Calling .Start
	I1213 11:33:24.058516    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.058530    5233 main.go:141] libmachine: (ha-224000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid
	I1213 11:33:24.059997    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid 4112 missing from process table
	I1213 11:33:24.060006    5233 main.go:141] libmachine: (ha-224000) DBG | pid 4112 is in state "Stopped"
	I1213 11:33:24.060020    5233 main.go:141] libmachine: (ha-224000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid...
	I1213 11:33:24.060148    5233 main.go:141] libmachine: (ha-224000) DBG | Using UUID b2cf51fb-709d-45fe-a947-282a845e5503
	I1213 11:33:24.195839    5233 main.go:141] libmachine: (ha-224000) DBG | Generated MAC e2:1f:26:f2:db:4d
	I1213 11:33:24.195876    5233 main.go:141] libmachine: (ha-224000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:33:24.196013    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b2cf51fb-709d-45fe-a947-282a845e5503", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:24.196037    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b2cf51fb-709d-45fe-a947-282a845e5503", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:24.196083    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b2cf51fb-709d-45fe-a947-282a845e5503", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/ha-224000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:33:24.196130    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b2cf51fb-709d-45fe-a947-282a845e5503 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/ha-224000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:33:24.196149    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:33:24.198377    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 DEBUG: hyperkit: Pid is 5248
	I1213 11:33:24.198751    5233 main.go:141] libmachine: (ha-224000) DBG | Attempt 0
	I1213 11:33:24.198766    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:24.198839    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:33:24.200071    5233 main.go:141] libmachine: (ha-224000) DBG | Searching for e2:1f:26:f2:db:4d in /var/db/dhcpd_leases ...
	I1213 11:33:24.200197    5233 main.go:141] libmachine: (ha-224000) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:33:24.200237    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:33:24.200259    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c99d7}
	I1213 11:33:24.200275    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c98c5}
	I1213 11:33:24.200287    5233 main.go:141] libmachine: (ha-224000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9849}
	I1213 11:33:24.200302    5233 main.go:141] libmachine: (ha-224000) DBG | Found match: e2:1f:26:f2:db:4d
	I1213 11:33:24.200309    5233 main.go:141] libmachine: (ha-224000) DBG | IP: 192.169.0.6
	I1213 11:33:24.200346    5233 main.go:141] libmachine: (ha-224000) Calling .GetConfigRaw
	I1213 11:33:24.201046    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:24.201273    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:24.201998    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:33:24.202010    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:24.202152    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:24.202253    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:24.202345    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:24.202460    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:24.202575    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:24.202734    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:24.202918    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:24.202926    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:33:24.209830    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:33:24.275074    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:33:24.275977    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:24.275998    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:24.276018    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:24.276028    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:24.664445    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:33:24.664462    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:33:24.779029    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:24.779050    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:24.779061    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:24.779087    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:24.779925    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:33:24.779935    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:33:30.509300    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:33:30.509378    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:33:30.509389    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:33:30.535654    5233 main.go:141] libmachine: (ha-224000) DBG | 2024/12/13 11:33:30 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:33:35.263286    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
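The exchange above is provisionDockerMachine probing the guest: it resolves the SSH host/port/key through the driver, dials 192.169.0.6:22 as the docker user, and runs `hostname`, treating the output ("minikube") as the machine's current hostname. A minimal sketch of such a probe using golang.org/x/crypto/ssh follows; it is illustrative, not minikube's actual implementation, and reuses the key path visible in the sshutil lines later in this log.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path taken from the sshutil lines later in this log.
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "192.169.0.6:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname") // the same probe as the log above
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("guest hostname: %s", out)
}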
	I1213 11:33:35.263305    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.263484    5233 buildroot.go:166] provisioning hostname "ha-224000"
	I1213 11:33:35.263495    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.263594    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.263690    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.263795    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.263879    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.263974    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.264111    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.264249    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.264257    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000 && echo "ha-224000" | sudo tee /etc/hostname
	I1213 11:33:35.330220    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000
	
	I1213 11:33:35.330242    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.330385    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.330487    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.330579    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.330683    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.330825    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.330962    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.330973    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:33:35.395347    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
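The script above makes the /etc/hosts edit idempotent: if a line already ends in the hostname nothing happens, an existing 127.0.1.1 entry is rewritten in place, and otherwise a new entry is appended. The same decision tree in plain Go (stdlib only; a sketch, not the code minikube runs):

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell above: no-op if the hostname is already
// present, rewrite an existing 127.0.1.1 line, else append a fresh entry.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) >= 2 && fields[len(fields)-1] == hostname {
			return nil // already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-224000"); err != nil {
		log.Fatal(err)
	}
}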
	I1213 11:33:35.395367    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:33:35.395380    5233 buildroot.go:174] setting up certificates
	I1213 11:33:35.395390    5233 provision.go:84] configureAuth start
	I1213 11:33:35.395396    5233 main.go:141] libmachine: (ha-224000) Calling .GetMachineName
	I1213 11:33:35.395536    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:35.395626    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.395729    5233 provision.go:143] copyHostCerts
	I1213 11:33:35.395759    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:33:35.395813    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:33:35.395824    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:33:35.395941    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:33:35.396166    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:33:35.396198    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:33:35.396203    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:33:35.396305    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:33:35.396479    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:33:35.396511    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:33:35.396516    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:33:35.396585    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:33:35.396750    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000 san=[127.0.0.1 192.169.0.6 ha-224000 localhost minikube]
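configureAuth then mints a server certificate signed by the minikube CA, with the SANs listed above (loopback, the VM IP 192.169.0.6, and the machine/host names) baked in so the Docker TLS endpoint verifies under any of them. The core of such SAN-bearing issuance with crypto/x509 looks roughly like this (CA generated inline for self-containment; minikube instead loads ca.pem/ca-key.pem from its certs directory, and key sizes and lifetimes here are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA generated on the fly; minikube loads an existing CA instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the same SANs as the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-224000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-224000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}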
	I1213 11:33:35.608012    5233 provision.go:177] copyRemoteCerts
	I1213 11:33:35.608088    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:33:35.608110    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.608273    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.608376    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.608484    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.608616    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:35.643782    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:33:35.643849    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:33:35.663504    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:33:35.663563    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1213 11:33:35.683076    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:33:35.683137    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:33:35.702561    5233 provision.go:87] duration metric: took 307.16247ms to configureAuth
	I1213 11:33:35.702573    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:33:35.702742    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:35.702756    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:35.702886    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.702984    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.703073    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.703154    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.703252    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.703383    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.703507    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.703514    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:33:35.761527    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:33:35.761539    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:33:35.761614    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:33:35.761631    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.761761    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.761867    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.761952    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.762029    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.762180    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.762322    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.762369    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:33:35.829448    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:33:35.829473    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:35.829611    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:35.829710    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.829804    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:35.829882    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:35.830037    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:35.830180    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:35.830192    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:33:37.506714    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:33:37.506731    5233 machine.go:96] duration metric: took 13.304830015s to provisionDockerMachine
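Two details of the unit install above are worth noting: the empty ExecStart= line is the standard systemd idiom for clearing an inherited start command before redefining it (the comment block in the unit explains why), and the `diff -u ... || { mv ...; restart; }` wrapper only replaces and restarts Docker when the rendered unit actually changed. The unit is rendered with driver-specific values spliced in; a trimmed-down sketch of that kind of rendering with text/template (the struct fields and shortened unit body are illustrative, not minikube's actual template):

package main

import (
	"log"
	"os"
	"text/template"
)

// unit is a trimmed-down stand-in for the docker.service content above;
// the real unit carries many more directives.
const unit = `[Unit]
Description=Docker Application Container Engine

[Service]
Type=notify
Restart=on-failure
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	// Values match the flags visible in the rendered unit above.
	err := t.Execute(os.Stdout, struct{ Provider, InsecureRegistry string }{
		Provider:         "hyperkit",
		InsecureRegistry: "10.96.0.0/12",
	})
	if err != nil {
		log.Fatal(err)
	}
}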
	I1213 11:33:37.506744    5233 start.go:293] postStartSetup for "ha-224000" (driver="hyperkit")
	I1213 11:33:37.506752    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:33:37.506763    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.506964    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:33:37.506981    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.507084    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.507184    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.507273    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.507359    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.549053    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:33:37.553822    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:33:37.553837    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:33:37.553928    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:33:37.554104    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:33:37.554111    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:33:37.554283    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:33:37.567654    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:33:37.594179    5233 start.go:296] duration metric: took 87.426295ms for postStartSetup
	I1213 11:33:37.594207    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.594408    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:33:37.594421    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.594508    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.594590    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.594724    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.594816    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.628799    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:33:37.628871    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:33:37.659933    5233 fix.go:56] duration metric: took 13.659041433s for fixHost
	I1213 11:33:37.659954    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.660095    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.660190    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.660283    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.660359    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.660499    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:37.660647    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1213 11:33:37.660654    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:33:37.718237    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118417.855687365
	
	I1213 11:33:37.718250    5233 fix.go:216] guest clock: 1734118417.855687365
	I1213 11:33:37.718256    5233 fix.go:229] Guest: 2024-12-13 11:33:37.855687365 -0800 PST Remote: 2024-12-13 11:33:37.659944 -0800 PST m=+14.144143612 (delta=195.743365ms)
	I1213 11:33:37.718279    5233 fix.go:200] guest clock delta is within tolerance: 195.743365ms
	I1213 11:33:37.718284    5233 start.go:83] releasing machines lock for "ha-224000", held for 13.717432141s
	I1213 11:33:37.718302    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718458    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:37.718557    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718855    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.718959    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:33:37.719072    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:33:37.719100    5233 ssh_runner.go:195] Run: cat /version.json
	I1213 11:33:37.719104    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.719118    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:33:37.719221    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.719232    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:33:37.719345    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.719360    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:33:37.719454    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.719480    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:33:37.719588    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.719609    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:33:37.801992    5233 ssh_runner.go:195] Run: systemctl --version
	I1213 11:33:37.807211    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:33:37.811454    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:33:37.811510    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:33:37.823724    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:33:37.823735    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:33:37.823838    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:33:37.842317    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:33:37.851247    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:33:37.859919    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:33:37.859977    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:33:37.868699    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:33:37.877385    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:33:37.885895    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:33:37.894631    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:33:37.903433    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:33:37.912080    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:33:37.920838    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:33:37.929686    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:33:37.937526    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:33:37.937575    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:33:37.946343    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:33:37.954321    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:38.055814    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
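The status-255 sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. That check-then-load fallback, sketched in Go (must run as root inside the guest; paths as in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const knob = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(knob); err != nil {
		// The knob only appears once br_netfilter is loaded, hence the fallback.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
			os.Exit(1)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}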
	I1213 11:33:38.074538    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:33:38.074638    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:33:38.087031    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:33:38.101085    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:33:38.116013    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:33:38.126951    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:33:38.137488    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:33:38.158482    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:33:38.168678    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:33:38.183844    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:33:38.186730    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:33:38.193926    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:33:38.207186    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:33:38.306381    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:33:38.409182    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:33:38.409284    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:33:38.423485    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:38.520298    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:33:40.856468    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336161165s)
	I1213 11:33:40.856560    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:33:40.867785    5233 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 11:33:40.881291    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:33:40.891767    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:33:40.985833    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:33:41.094364    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.203166    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:33:41.217499    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:33:41.228676    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.322265    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:33:41.392321    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:33:41.392423    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:33:41.396866    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:33:41.396929    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:33:41.400110    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:33:41.428478    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:33:41.428562    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:33:41.446343    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
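With the CRI socket up, the runtime is identified twice: once through `crictl version` (yielding the RuntimeName/RuntimeVersion block above) and once through `docker version` with a Go template that prints only the server version. The host-side shape of the latter is a small wrapper around os/exec:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// --format with a Go template makes docker print only the requested field.
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	version := strings.TrimSpace(string(out)) // "27.4.0" in the run above
	fmt.Println("docker server version:", version)
}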
	I1213 11:33:41.486067    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:33:41.486118    5233 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:33:41.486570    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:33:41.490428    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:33:41.500921    5233 kubeadm.go:883] updating cluster {Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:33:41.501009    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:41.501080    5233 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 11:33:41.514302    5233 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.7
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1213 11:33:41.514313    5233 docker.go:619] Images already preloaded, skipping extraction
	I1213 11:33:41.514404    5233 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 11:33:41.528088    5233 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.7
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1213 11:33:41.528111    5233 cache_images.go:84] Images are preloaded, skipping loading
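The two identical image lists above come from running `docker images` before and after the preload decision: because every image required for v1.31.2 is already in the Docker cache, the preload tarball is not extracted again. The containment check behind "Images are preloaded, skipping loading" amounts to the following (expected list abbreviated from the output above; a sketch, not minikube's code):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{ // abbreviated from the preloaded-image list above
		"registry.k8s.io/kube-apiserver:v1.31.2",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would extract preload:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}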
	I1213 11:33:41.528123    5233 kubeadm.go:934] updating node { 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1213 11:33:41.528195    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:33:41.528276    5233 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 11:33:41.563286    5233 cni.go:84] Creating CNI manager for ""
	I1213 11:33:41.563301    5233 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1213 11:33:41.563314    5233 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 11:33:41.563331    5233 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.6 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-224000 NodeName:ha-224000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:33:41.563411    5233 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-224000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.6"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.6"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
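The kubeadm.yaml rendered above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A quick way to sanity-check such a multi-document file in Go is gopkg.in/yaml.v3's streaming decoder (file path illustrative):

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		// Prints e.g. "kubeadm.k8s.io/v1beta4/InitConfiguration".
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}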
	I1213 11:33:41.563429    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:33:41.563502    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:33:41.577356    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:33:41.577431    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
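kube-vip runs as a static pod: the manifest above is placed in /etc/kubernetes/manifests (the staticPodPath from the kubelet configuration earlier), where the kubelet starts it with no API server involvement; the scp of kube-vip.yaml a few lines below is exactly that file drop. Reduced to its essence, deployment is an atomic file write, roughly as sketched here (the temp-file dance is an illustrative hardening, not necessarily what minikube does):

package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	manifest := []byte("# rendered kube-vip pod YAML goes here") // placeholder
	dir := "/etc/kubernetes/manifests"
	tmp := filepath.Join(dir, ".kube-vip.yaml.tmp")
	if err := os.WriteFile(tmp, manifest, 0600); err != nil {
		log.Fatal(err)
	}
	// Rename is atomic on the same filesystem, so the kubelet never observes a partial file.
	if err := os.Rename(tmp, filepath.Join(dir, "kube-vip.yaml")); err != nil {
		log.Fatal(err)
	}
}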
	I1213 11:33:41.577503    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:33:41.586076    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:33:41.586130    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1213 11:33:41.593693    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1213 11:33:41.607111    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:33:41.620717    5233 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1213 11:33:41.634595    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:33:41.648138    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:33:41.651088    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:33:41.660611    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:41.764209    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:33:41.776920    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.6
	I1213 11:33:41.776935    5233 certs.go:194] generating shared ca certs ...
	I1213 11:33:41.776947    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.777111    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:33:41.777172    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:33:41.777182    5233 certs.go:256] generating profile certs ...
	I1213 11:33:41.777268    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:33:41.777289    5233 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848
	I1213 11:33:41.777307    5233 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.6 192.169.0.7 192.169.0.8 192.169.0.254]
	I1213 11:33:41.924008    5233 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 ...
	I1213 11:33:41.924024    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848: {Name:mk14c8bdd605a32a15c7e818d08d02d64b9be917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.925000    5233 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848 ...
	I1213 11:33:41.925011    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848: {Name:mk0673ccf9e28132db2b00d320fea4d73482d286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:41.925290    5233 certs.go:381] copying /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt.285db848 -> /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt
	I1213 11:33:41.925479    5233 certs.go:385] copying /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.285db848 -> /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key
	I1213 11:33:41.925688    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:33:41.925697    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:33:41.925721    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:33:41.925741    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:33:41.925761    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:33:41.925780    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:33:41.925802    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:33:41.925823    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:33:41.925841    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:33:41.925928    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:33:41.925965    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:33:41.925979    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:33:41.926013    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:33:41.926042    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:33:41.926077    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:33:41.926146    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:33:41.926184    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:33:41.926207    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:41.926225    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:33:41.927710    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:33:41.951166    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:33:41.975929    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:33:42.015520    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:33:42.051250    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:33:42.097395    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:33:42.139215    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:33:42.167922    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:33:42.188284    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:33:42.207671    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:33:42.226762    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:33:42.245781    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:33:42.259332    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:33:42.263629    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:33:42.272753    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.276074    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.276126    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:33:42.280400    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:33:42.289318    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:33:42.298635    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.301936    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.301986    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:42.306272    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 11:33:42.315219    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:33:42.324178    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.327536    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.327583    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:33:42.331821    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
	I1213 11:33:42.340849    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:33:42.344177    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:33:42.348774    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:33:42.353021    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:33:42.357742    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:33:42.361999    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:33:42.366226    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
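Each `openssl x509 ... -checkend 86400` call above asks a single question: does this certificate expire within the next 86400 seconds (24 hours)? A non-zero exit would trigger regeneration before the cluster restart. The equivalent check in Go with crypto/x509 (path taken from the first check above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same test as `openssl x509 -checkend 86400`.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h, expires:", cert.NotAfter)
}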
	I1213 11:33:42.370715    5233 kubeadm.go:392] StartCluster: {Name:ha-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.9 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:42.370839    5233 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 11:33:42.382402    5233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:33:42.390619    5233 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1213 11:33:42.390630    5233 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1213 11:33:42.390688    5233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:33:42.399169    5233 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:33:42.399486    5233 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-224000" does not appear in /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.399573    5233 kubeconfig.go:62] /Users/jenkins/minikube-integration/20090-800/kubeconfig needs updating (will repair): [kubeconfig missing "ha-224000" cluster setting kubeconfig missing "ha-224000" context setting]
	I1213 11:33:42.399754    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/kubeconfig: {Name:mk8eff3a3a3e37d84455f265c7172359004b7be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.400139    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.400368    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:33:42.400704    5233 cert_rotation.go:140] Starting client certificate rotation controller
	I1213 11:33:42.400887    5233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:33:42.408731    5233 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.6
	I1213 11:33:42.408748    5233 kubeadm.go:597] duration metric: took 18.113581ms to restartPrimaryControlPlane
	I1213 11:33:42.408754    5233 kubeadm.go:394] duration metric: took 38.045507ms to StartCluster
	I1213 11:33:42.408764    5233 settings.go:142] acquiring lock: {Name:mk0626482d1a77203bd9c1b6d841b6780f4771c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.408852    5233 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:33:42.409247    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20090-800/kubeconfig: {Name:mk8eff3a3a3e37d84455f265c7172359004b7be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:42.409470    5233 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:33:42.409483    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:33:42.409500    5233 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:33:42.409614    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:42.452999    5233 out.go:177] * Enabled addons: 
	I1213 11:33:42.473889    5233 addons.go:510] duration metric: took 64.391249ms for enable addons: enabled=[]
	I1213 11:33:42.473995    5233 start.go:246] waiting for cluster config update ...
	I1213 11:33:42.474008    5233 start.go:255] writing updated cluster config ...
	I1213 11:33:42.496132    5233 out.go:201] 
	I1213 11:33:42.517570    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:33:42.517711    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.541038    5233 out.go:177] * Starting "ha-224000-m02" control-plane node in "ha-224000" cluster
	I1213 11:33:42.583131    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:33:42.583188    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:33:42.583372    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:33:42.583392    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:33:42.583516    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.584724    5233 start.go:360] acquireMachinesLock for ha-224000-m02: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:33:42.584832    5233 start.go:364] duration metric: took 83.288µs to acquireMachinesLock for "ha-224000-m02"
	I1213 11:33:42.584859    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:33:42.584868    5233 fix.go:54] fixHost starting: m02
	I1213 11:33:42.585263    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:33:42.585289    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:33:42.597490    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51838
	I1213 11:33:42.598009    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:33:42.598520    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:33:42.598537    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:33:42.598854    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:33:42.598984    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:33:42.599156    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetState
	I1213 11:33:42.599250    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.599342    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid from json: 5143
	I1213 11:33:42.600521    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid 5143 missing from process table
	I1213 11:33:42.600553    5233 fix.go:112] recreateIfNeeded on ha-224000-m02: state=Stopped err=<nil>
	I1213 11:33:42.600561    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	W1213 11:33:42.600657    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:33:42.642952    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m02" ...
	I1213 11:33:42.664177    5233 main.go:141] libmachine: (ha-224000-m02) Calling .Start
	I1213 11:33:42.664494    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.664558    5233 main.go:141] libmachine: (ha-224000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid
	I1213 11:33:42.666694    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid 5143 missing from process table
	I1213 11:33:42.666707    5233 main.go:141] libmachine: (ha-224000-m02) DBG | pid 5143 is in state "Stopped"
	I1213 11:33:42.666723    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid...
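
Before restarting the VM, the driver notices that pid 5143 recorded in hyperkit.pid is gone from the process table and removes the stale file. The standard probe for this is signal 0, which delivers nothing but still performs the existence check; a minimal sketch (the pid-file path is illustrative):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
        "syscall"
    )

    // pidAlive reports whether a process with this pid exists.
    func pidAlive(pid int) bool {
        return syscall.Kill(pid, 0) == nil
    }

    func main() {
        const pidFile = "/tmp/hyperkit.pid" // illustrative path
        data, err := os.ReadFile(pidFile)
        if err != nil {
            return // no pid file, nothing to clean up
        }
        pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
        if err == nil && !pidAlive(pid) {
            fmt.Printf("pid %d missing from process table, removing stale %s\n", pid, pidFile)
            os.Remove(pidFile)
        }
    }
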
	I1213 11:33:42.667115    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Using UUID 573e64b1-a821-4bce-aba3-b379863bb495
	I1213 11:33:42.694947    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Generated MAC fa:54:eb:53:13:e6
	I1213 11:33:42.695001    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:33:42.695241    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"573e64b1-a821-4bce-aba3-b379863bb495", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000429650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:42.695304    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"573e64b1-a821-4bce-aba3-b379863bb495", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000429650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:33:42.695353    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "573e64b1-a821-4bce-aba3-b379863bb495", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/ha-224000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:33:42.695424    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 573e64b1-a821-4bce-aba3-b379863bb495 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/ha-224000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:33:42.695442    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:33:42.697074    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 DEBUG: hyperkit: Pid is 5263
	I1213 11:33:42.697519    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Attempt 0
	I1213 11:33:42.697548    5233 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:33:42.697612    5233 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid from json: 5263
	I1213 11:33:42.699596    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Searching for fa:54:eb:53:13:e6 in /var/db/dhcpd_leases ...
	I1213 11:33:42.699713    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:33:42.699733    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:33:42.699753    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:33:42.699767    5233 main.go:141] libmachine: (ha-224000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c99d7}
	I1213 11:33:42.699789    5233 main.go:141] libmachine: (ha-224000-m02) DBG | Found match: fa:54:eb:53:13:e6
	I1213 11:33:42.699807    5233 main.go:141] libmachine: (ha-224000-m02) DBG | IP: 192.169.0.7
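
To recover the VM's address, the driver scans macOS's /var/db/dhcpd_leases for the MAC it generated (fa:54:eb:53:13:e6) and takes the address of the matching block. A rough sketch of that lookup; the on-disk layout assumed here (an `ip_address=` line followed by `hw_address=1,<mac>` inside each `{...}` block) is inferred from the entries printed above, not taken from minikube's parser:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC returns the ip_address of the lease block whose hw_address
    // matches mac.
    func ipForMAC(leasesPath, mac string) (string, error) {
        f, err := os.Open(leasesPath)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // value looks like "1,fa:54:eb:53:13:e6"
                if strings.HasSuffix(line, ","+mac) {
                    return ip, nil
                }
            case line == "}":
                ip = "" // block ended without a match
            }
        }
        return "", fmt.Errorf("%s not found in %s", mac, leasesPath)
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "fa:54:eb:53:13:e6")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("IP:", ip)
    }
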
	I1213 11:33:42.699845    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetConfigRaw
	I1213 11:33:42.700566    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:33:42.700747    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:33:42.701233    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:33:42.701243    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:33:42.701360    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:33:42.701474    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:33:42.701583    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:33:42.701690    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:33:42.701786    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:33:42.701932    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:33:42.702072    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:33:42.702079    5233 main.go:141] libmachine: About to run SSH command:
	hostname
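
The "native" SSH client then runs `hostname` against 192.169.0.7:22 with the machine's id_rsa key, retrying until the guest's sshd answers (which here takes until 11:34:17). Roughly equivalent standalone code using golang.org/x/crypto/ssh; host-key verification is skipped purely for brevity, and a real client should pin the key:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH connects with a private key and returns the command's stdout.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.Output(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("192.169.0.7:22", "docker",
            os.ExpandEnv("$HOME/.minikube/machines/ha-224000-m02/id_rsa"), "hostname")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Print(out)
    }
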
	I1213 11:33:42.708424    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:33:42.717944    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:33:42.718853    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:42.718881    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:42.718896    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:42.718909    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:43.109099    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:33:43.109114    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:33:43.223848    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:33:43.223866    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:33:43.223877    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:33:43.223884    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:33:43.224755    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:33:43.224765    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:33:48.997042    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:33:48.997098    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:33:48.997108    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:48 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:33:49.020830    5233 main.go:141] libmachine: (ha-224000-m02) DBG | 2024/12/13 11:33:49 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:34:17.779287    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:34:17.779302    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.779433    5233 buildroot.go:166] provisioning hostname "ha-224000-m02"
	I1213 11:34:17.779441    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.779556    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.779664    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:17.779746    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.779835    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.779942    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:17.780083    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:17.780222    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:17.780230    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m02 && echo "ha-224000-m02" | sudo tee /etc/hostname
	I1213 11:34:17.853511    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m02
	
	I1213 11:34:17.853529    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.853672    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:17.853764    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.853853    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:17.853936    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:17.854073    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:17.854254    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:17.854268    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:34:17.919686    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:34:17.919701    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:34:17.919711    5233 buildroot.go:174] setting up certificates
	I1213 11:34:17.919720    5233 provision.go:84] configureAuth start
	I1213 11:34:17.919727    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetMachineName
	I1213 11:34:17.919878    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:17.919996    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:17.920105    5233 provision.go:143] copyHostCerts
	I1213 11:34:17.920136    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:34:17.920185    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:34:17.920199    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:34:17.920354    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:34:17.920585    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:34:17.920616    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:34:17.920621    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:34:17.920688    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:34:17.920873    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:34:17.920909    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:34:17.920914    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:34:17.920981    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:34:17.921606    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m02 san=[127.0.0.1 192.169.0.7 ha-224000-m02 localhost minikube]
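
provision.go:117 issues a server certificate whose SANs cover every name the Docker daemon may be reached by: 127.0.0.1, 192.169.0.7, ha-224000-m02, localhost, and minikube. A hedged crypto/x509 sketch of signing such a SAN-bearing leaf with a local CA; the key size, validity, and subject fields are illustrative, not minikube's exact choices:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Self-signed CA standing in for the machine CA (certs/ca.pem).
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"jenkins.ha-224000-m02"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER := must(x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey))
        caCert := must(x509.ParseCertificate(caDER))

        // Leaf with the SAN list from the log line above.
        srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
        leaf := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "ha-224000-m02"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
            DNSNames:     []string{"ha-224000-m02", "localhost", "minikube"},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der := must(x509.CreateCertificate(rand.Reader, leaf, caCert, &srvKey.PublicKey, caKey))
        fmt.Printf("server cert: %d DER bytes, SANs %v %v\n", len(der), leaf.IPAddresses, leaf.DNSNames)
    }
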
	I1213 11:34:18.018851    5233 provision.go:177] copyRemoteCerts
	I1213 11:34:18.018930    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:34:18.018950    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.019110    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.019222    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.019333    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.019447    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:18.056757    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:34:18.056824    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:34:18.076340    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:34:18.076402    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:34:18.095849    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:34:18.095918    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:34:18.115722    5233 provision.go:87] duration metric: took 195.866505ms to configureAuth
	I1213 11:34:18.115736    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:34:18.115914    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:18.115934    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:18.116067    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.116155    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.116267    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.116362    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.116456    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.116584    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.116708    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.116716    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:34:18.177000    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:34:18.177013    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:34:18.177102    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:34:18.177115    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.177250    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.177339    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.177434    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.177521    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.177668    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.177802    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.177848    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:34:18.247535    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:34:18.247560    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:18.247701    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:18.247799    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.247889    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:18.247972    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:18.248144    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:18.248281    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:18.248294    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:34:19.945302    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:34:19.945316    5233 machine.go:96] duration metric: took 37.234619508s to provisionDockerMachine
	I1213 11:34:19.945325    5233 start.go:293] postStartSetup for "ha-224000-m02" (driver="hyperkit")
	I1213 11:34:19.945338    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:34:19.945348    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:19.945560    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:34:19.945574    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:19.945673    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:19.945782    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:19.945867    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:19.945970    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:19.983485    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:34:19.986722    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:34:19.986734    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:34:19.986812    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:34:19.986953    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:34:19.986959    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:34:19.987126    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:34:19.994240    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:34:20.014210    5233 start.go:296] duration metric: took 68.83207ms for postStartSetup
	I1213 11:34:20.014230    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.014422    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:34:20.014435    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.014537    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.014623    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.014704    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.014788    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:20.051647    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:34:20.051721    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:34:20.083772    5233 fix.go:56] duration metric: took 37.489367071s for fixHost
	I1213 11:34:20.083797    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.083942    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.084018    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.084114    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.084207    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.084348    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:20.084490    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1213 11:34:20.084497    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:34:20.144388    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118460.015290153
	
	I1213 11:34:20.144404    5233 fix.go:216] guest clock: 1734118460.015290153
	I1213 11:34:20.144410    5233 fix.go:229] Guest: 2024-12-13 11:34:20.015290153 -0800 PST Remote: 2024-12-13 11:34:20.083787 -0800 PST m=+56.558492323 (delta=-68.496847ms)
	I1213 11:34:20.144420    5233 fix.go:200] guest clock delta is within tolerance: -68.496847ms
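
fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs when the delta leaves tolerance (here -68ms passes). A small sketch of that comparison; the 2-second threshold is an assumption for illustration, not the value minikube uses:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // guestDelta parses `date +%s.%N` output and returns guest-minus-host skew.
    // float64 loses nanosecond precision, which is fine at this scale.
    func guestDelta(dateOutput string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(dateOutput, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        const tolerance = 2 * time.Second // assumed threshold
        d, err := guestDelta("1734118460.015290153", time.Now())
        if err != nil {
            panic(err)
        }
        if d > -tolerance && d < tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", d)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", d)
        }
    }
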
	I1213 11:34:20.144423    5233 start.go:83] releasing machines lock for "ha-224000-m02", held for 37.550011232s
	I1213 11:34:20.144441    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.144584    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:20.167177    5233 out.go:177] * Found network options:
	I1213 11:34:20.188040    5233 out.go:177]   - NO_PROXY=192.169.0.6
	W1213 11:34:20.210009    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:34:20.210052    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.210927    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.211209    5233 main.go:141] libmachine: (ha-224000-m02) Calling .DriverName
	I1213 11:34:20.211385    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:34:20.211422    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	W1213 11:34:20.211452    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:34:20.211589    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:34:20.211610    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHHostname
	I1213 11:34:20.211651    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.211865    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHPort
	I1213 11:34:20.211907    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.212101    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHKeyPath
	I1213 11:34:20.212120    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.212285    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetSSHUsername
	I1213 11:34:20.212303    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	I1213 11:34:20.212458    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m02/id_rsa Username:docker}
	W1213 11:34:20.245031    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:34:20.245108    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:34:20.305744    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:34:20.305779    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:34:20.305887    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:34:20.321917    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:34:20.330318    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:34:20.338449    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:34:20.338512    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:34:20.346961    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:34:20.355388    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:34:20.363629    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:34:20.371829    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:34:20.380410    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:34:20.388794    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:34:20.397231    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:34:20.405722    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:34:20.413168    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:34:20.413221    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:34:20.421725    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
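
The sysctl probe failed because /proc/sys/net/bridge only exists once the br_netfilter module is loaded, so the fallback is modprobe followed by enabling IPv4 forwarding, exactly the sequence above. As a standalone sketch (must run as root on the guest):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); os.IsNotExist(err) {
            // The bridge sysctls only appear after the module loads.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s", err, out)
                return
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
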
	I1213 11:34:20.429719    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:20.529241    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:34:20.543578    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:34:20.543670    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:34:20.554987    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:34:20.567690    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:34:20.581251    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:34:20.592466    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:34:20.603581    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:34:20.625283    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:34:20.635539    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:34:20.650656    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:34:20.653582    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:34:20.660675    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:34:20.674213    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:34:20.766147    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:34:20.880974    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:34:20.880996    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
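
The 130-byte /etc/docker/daemon.json pushed here is not shown in the log; a plausible minimal payload consistent with "configuring docker to use cgroupfs" would set exec-opts. The keys below are guesses (all are real dockerd daemon.json options), generated here just to show the shape:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed contents; the log only reveals the file's size and purpose.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
    }
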
	I1213 11:34:20.895110    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:20.996896    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:34:23.324011    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.325927019s)
	I1213 11:34:23.324083    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:34:23.334876    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:34:23.345278    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:34:23.440468    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:34:23.550842    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:23.658765    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:34:23.672210    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:34:23.683300    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:23.776286    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:34:23.841785    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:34:23.841892    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:34:23.847288    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:34:23.847368    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:34:23.850479    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:34:23.877340    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:34:23.877457    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:34:23.894304    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:34:23.933199    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:34:23.953827    5233 out.go:177]   - env NO_PROXY=192.169.0.6
	I1213 11:34:23.975731    5233 main.go:141] libmachine: (ha-224000-m02) Calling .GetIP
	I1213 11:34:23.976228    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:34:23.980868    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:34:23.990424    5233 mustload.go:65] Loading cluster: ha-224000
	I1213 11:34:23.990607    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:23.990844    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:23.990865    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:24.002451    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51860
	I1213 11:34:24.002790    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:24.003114    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:24.003125    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:24.003331    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:24.003469    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:34:24.003590    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:24.003653    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:34:24.004855    5233 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:34:24.005135    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:24.005159    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:24.016676    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51862
	I1213 11:34:24.017013    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:24.017327    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:24.017339    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:24.017581    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:24.017710    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:34:24.017828    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.7
	I1213 11:34:24.017838    5233 certs.go:194] generating shared ca certs ...
	I1213 11:34:24.017849    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:34:24.017995    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:34:24.018055    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:34:24.018064    5233 certs.go:256] generating profile certs ...
	I1213 11:34:24.018159    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:34:24.018227    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.d29f1a5b
	I1213 11:34:24.018283    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:34:24.018291    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:34:24.018312    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:34:24.018338    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:34:24.018360    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:34:24.018382    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:34:24.018401    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:34:24.018420    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:34:24.018438    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:34:24.018527    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:34:24.018569    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:34:24.018578    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:34:24.018614    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:34:24.018649    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:34:24.018679    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:34:24.018787    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:34:24.018831    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.018854    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.018872    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.018902    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:34:24.018999    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:34:24.019091    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:34:24.019182    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:34:24.019261    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:34:24.046997    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 11:34:24.050721    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 11:34:24.059570    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 11:34:24.062693    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 11:34:24.071272    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 11:34:24.074372    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 11:34:24.083223    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 11:34:24.086307    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1213 11:34:24.095588    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 11:34:24.098711    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 11:34:24.107784    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 11:34:24.110902    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1213 11:34:24.120480    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:34:24.141070    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:34:24.160878    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:34:24.180920    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:34:24.200790    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:34:24.220908    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:34:24.240966    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:34:24.260343    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:34:24.279661    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:34:24.298866    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:34:24.318211    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:34:24.337602    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 11:34:24.351230    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 11:34:24.364930    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 11:34:24.378548    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1213 11:34:24.392045    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 11:34:24.405741    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1213 11:34:24.419366    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 11:34:24.433162    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:34:24.437460    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:34:24.446555    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.449893    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.449949    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:34:24.454195    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:34:24.463315    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:34:24.472398    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.475806    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.475869    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:34:24.480014    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 11:34:24.488936    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:34:24.498028    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.501370    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.501420    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:34:24.505749    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
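
The hash-and-symlink pairs above follow OpenSSL's CA-directory convention: a certificate is found by lookup only if it is reachable as <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (3ec20f2e, b5213941, 51391683 in this run). A sketch of the same two steps from Go; installCert is an illustrative name, not a minikube helper:

	package sketch

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCert links certPath into the system CA directory under its
	// subject hash, i.e. the `openssl x509 -hash` + `ln -fs` pair above.
	func installCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. 3ec20f2e for 17962.pem
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate ln's -f: replace any existing link
		return os.Symlink(certPath, link)
	}
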
	I1213 11:34:24.514801    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:34:24.518173    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:34:24.522615    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:34:24.526939    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:34:24.531281    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:34:24.535563    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:34:24.539842    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
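
Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 86,400 seconds (24 hours); a non-zero exit would force regeneration before the node joins. The equivalent check in pure Go with crypto/x509, assuming a single PEM block (expiresSoon is an illustrative name); calling expiresSoon(data, 24*time.Hour) matches -checkend 86400:

	package sketch

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"time"
	)

	// expiresSoon reports whether the first certificate in pemData expires
	// within window, the crypto/x509 equivalent of `openssl x509 -checkend`.
	func expiresSoon(pemData []byte, window time.Duration) (bool, error) {
		block, _ := pem.Decode(pemData)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
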
	I1213 11:34:24.544160    5233 kubeadm.go:934] updating node {m02 192.169.0.7 8443 v1.31.2 docker true true} ...
	I1213 11:34:24.544222    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:34:24.544239    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:34:24.544284    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:34:24.557092    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:34:24.557131    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
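
The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml as a static pod on each control-plane node: kube-vip elects a leader through the plndr-cp-lock lease and answers ARP for the shared VIP 192.169.0.254, and with lb_enable set it also load-balances port 8443 across members (the "auto-enabling control-plane load-balancing" line above). minikube renders this from a template in kube-vip.go; a stripped-down sketch of that templating approach, with an illustrative template that is not minikube's actual one:

	package sketch

	import (
		"io"
		"text/template"
	)

	type vipConfig struct {
		VIP, Interface, Image string
		Port                  int
	}

	var kubeVipManifest = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: {{.Image}}
	    args: ["manager"]
	    env:
	    - name: address
	      value: "{{.VIP}}"
	    - name: port
	      value: "{{.Port}}"
	    - name: vip_interface
	      value: {{.Interface}}
	  hostNetwork: true
	`))

	// renderKubeVip writes a static-pod manifest for the given VIP settings.
	func renderKubeVip(w io.Writer, c vipConfig) error {
		return kubeVipManifest.Execute(w, c)
	}
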
	I1213 11:34:24.557204    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:34:24.566007    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:34:24.566093    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 11:34:24.575831    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1213 11:34:24.589369    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:34:24.603027    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:34:24.616380    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:34:24.619250    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:34:24.628866    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:24.726853    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:34:24.741435    5233 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:34:24.741619    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:24.762788    5233 out.go:177] * Verifying Kubernetes components...
	I1213 11:34:24.783602    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:34:24.924600    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:34:24.940595    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:34:24.940795    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 11:34:24.940831    5233 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.6:8443
	I1213 11:34:24.940998    5233 node_ready.go:35] waiting up to 6m0s for node "ha-224000-m02" to be "Ready" ...
	I1213 11:34:24.941077    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:24.941083    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:24.941090    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:24.941095    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:25.941784    5233 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I1213 11:34:25.941996    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:25.942010    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:25.942024    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:25.942031    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:26.943551    5233 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I1213 11:34:26.943636    5233 node_ready.go:53] error getting node "ha-224000-m02": Get "https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02": dial tcp 192.169.0.6:8443: connect: connection refused
	I1213 11:34:26.943705    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:26.943715    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:26.943726    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:26.943733    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.736951    5233 round_trippers.go:574] Response Status: 200 OK in 6791 milliseconds
	I1213 11:34:33.738522    5233 node_ready.go:49] node "ha-224000-m02" has status "Ready":"True"
	I1213 11:34:33.738535    5233 node_ready.go:38] duration metric: took 8.794739664s for node "ha-224000-m02" to be "Ready" ...
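
The node_ready trace shows the shape of the wait loop: GET the Node object roughly once a second, treat failures such as the connection refused at 11:34:26 as transient, and stop when the Ready condition turns True, 8.79s after the join here. A stdlib-only sketch of that poll-until-ready pattern (waitReady and its check callback are illustrative names, not minikube's helpers):

	package sketch

	import (
		"fmt"
		"time"
	)

	// waitReady polls check every interval until it reports true or timeout
	// elapses; errors (like a refused connection) are retried, not fatal.
	func waitReady(timeout, interval time.Duration, check func() (bool, error)) error {
		deadline := time.Now().Add(timeout)
		for {
			ready, err := check()
			if err == nil && ready {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s (last error: %v)", timeout, err)
			}
			time.Sleep(interval)
		}
	}
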
	I1213 11:34:33.738543    5233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 11:34:33.738582    5233 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 11:34:33.738592    5233 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 11:34:33.738642    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:33.738649    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.738656    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.738661    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.750539    5233 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1213 11:34:33.759150    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.759215    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:34:33.759222    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.759229    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.759233    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.789285    5233 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1213 11:34:33.789752    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.789760    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.789766    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.789770    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.799141    5233 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1213 11:34:33.799424    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.799433    5233 pod_ready.go:82] duration metric: took 40.258328ms for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.799440    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.799505    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sswfx
	I1213 11:34:33.799511    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.799516    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.799520    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.807914    5233 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1213 11:34:33.808397    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.808404    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.808415    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.808419    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.813376    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.813909    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.813919    5233 pod_ready.go:82] duration metric: took 14.470417ms for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.813926    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.813967    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000
	I1213 11:34:33.813972    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.813978    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.813982    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.817802    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:33.818281    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:33.818288    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.818294    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.818299    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.823207    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.823485    5233 pod_ready.go:93] pod "etcd-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.823495    5233 pod_ready.go:82] duration metric: took 9.562079ms for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.823503    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.823545    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m02
	I1213 11:34:33.823551    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.823557    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.823561    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.827781    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.828190    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:33.828197    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.828204    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.828207    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.831785    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:33.832141    5233 pod_ready.go:93] pod "etcd-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.832151    5233 pod_ready.go:82] duration metric: took 8.641657ms for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.832159    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:33.832202    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m03
	I1213 11:34:33.832207    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.832213    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.832219    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.836265    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:33.939780    5233 request.go:632] Waited for 102.859328ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:33.939849    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:33.939857    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:33.939865    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:33.939871    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:33.946873    5233 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1213 11:34:33.947618    5233 pod_ready.go:93] pod "etcd-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:33.947630    5233 pod_ready.go:82] duration metric: took 115.439259ms for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
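
The "Waited ... due to client-side throttling" lines are client-go's own rate limiter, not API priority and fairness: with QPS and Burst left at 0 in the rest.Config dumped above, the client defaults to 5 requests/second with a burst of 10, so the paired pod+node GETs issued every few hundred milliseconds keep draining the bucket. A sketch of the same token-bucket gating with golang.org/x/time/rate, using those default values as an assumption:

	package sketch

	import (
		"context"
		"net/http"

		"golang.org/x/time/rate"
	)

	// client-go's effective defaults when rest.Config leaves QPS/Burst at 0.
	var limiter = rate.NewLimiter(rate.Limit(5), 10)

	// throttledGet blocks on the token bucket before each request, which is
	// where the "Waited ... due to client-side throttling" delays come from.
	func throttledGet(ctx context.Context, c *http.Client, url string) (*http.Response, error) {
		if err := limiter.Wait(ctx); err != nil {
			return nil, err
		}
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return nil, err
		}
		return c.Do(req)
	}
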
	I1213 11:34:33.947652    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.138902    5233 request.go:632] Waited for 191.1655ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:34:34.138938    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:34:34.138982    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.138990    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.138993    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.142609    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:34.339564    5233 request.go:632] Waited for 196.386923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:34.339642    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:34.339652    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.339688    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.339702    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.342232    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:34.342592    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:34.342602    5233 pod_ready.go:82] duration metric: took 394.853592ms for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.342609    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.540215    5233 request.go:632] Waited for 197.501487ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:34:34.540359    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:34:34.540371    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.540384    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.540391    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.544062    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:34.740387    5233 request.go:632] Waited for 195.768993ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:34.740457    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:34.740463    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.740470    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.740474    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.742464    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:34.742759    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:34.742770    5233 pod_ready.go:82] duration metric: took 400.065678ms for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.742777    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:34.940360    5233 request.go:632] Waited for 197.497147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:34:34.940426    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:34:34.940432    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:34.940438    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:34.940442    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:34.942974    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.139848    5233 request.go:632] Waited for 196.049551ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:35.139909    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:35.139915    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.139922    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.139927    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.142601    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.143154    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:35.143165    5233 pod_ready.go:82] duration metric: took 400.297853ms for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.143173    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.340241    5233 request.go:632] Waited for 196.968883ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:34:35.340288    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:34:35.340294    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.340301    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.340305    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.344403    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:35.539580    5233 request.go:632] Waited for 194.599751ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:35.539614    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:35.539618    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.539625    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.539628    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.541865    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:35.542227    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:35.542236    5233 pod_ready.go:82] duration metric: took 398.973916ms for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.542244    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:35.739398    5233 request.go:632] Waited for 197.024136ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:35.739550    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:35.739562    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.739574    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.739585    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.743222    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:35.939505    5233 request.go:632] Waited for 195.770633ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:35.939554    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:35.939560    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:35.939566    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:35.939572    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:35.941922    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:36.140471    5233 request.go:632] Waited for 97.089364ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.140522    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.140532    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.140544    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.140552    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.143672    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:36.339675    5233 request.go:632] Waited for 195.459387ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.339785    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.339799    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.339811    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.339818    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.344343    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:36.543195    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:36.543214    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.543223    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.543228    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.546614    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:36.740875    5233 request.go:632] Waited for 193.633171ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.740939    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:36.740951    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:36.740963    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:36.740974    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:36.745536    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:37.043269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:37.043284    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.043293    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.043297    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.046460    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:37.139384    5233 request.go:632] Waited for 92.520369ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.139445    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.139451    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.139457    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.139461    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.141508    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:37.544411    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:37.544439    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.544458    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.544464    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.548035    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:37.548715    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:37.548726    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:37.548734    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:37.548740    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:37.551007    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:37.551414    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:38.043335    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:38.043360    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.043371    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.043377    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.046826    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:38.047379    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:38.047390    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.047397    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.047402    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.049403    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:38.543656    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:38.543682    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.543702    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.543709    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.546343    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:38.546787    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:38.546797    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:38.546803    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:38.546807    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:38.548405    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:39.043375    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:39.043397    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.043405    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.043409    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.046060    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:39.046784    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:39.046792    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.046798    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.046801    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.048453    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:39.543079    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:39.543094    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.543100    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.543103    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.545426    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:39.545991    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:39.545999    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:39.546005    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:39.546008    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:39.548059    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:40.044134    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:40.044192    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.044205    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.044212    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.048181    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:40.048585    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:40.048594    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.048600    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.048603    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.050402    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:40.050801    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:40.543746    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:40.543772    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.543785    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.543818    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.547875    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:40.548358    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:40.548366    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:40.548372    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:40.548375    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:40.550043    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:41.043443    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:41.043501    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.043516    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.043523    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.047137    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:41.047586    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:41.047593    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.047598    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.047602    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.049298    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:41.544147    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:41.544170    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.544182    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.544190    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.548033    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:41.548573    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:41.548581    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:41.548587    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:41.548592    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:41.550267    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.044241    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:42.044256    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.044264    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.044268    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.046885    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:42.047355    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:42.047363    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.047369    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.047373    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.049099    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.543746    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:42.543762    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.543771    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.543776    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.546146    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:42.546521    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:42.546529    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:42.546535    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:42.546538    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:42.548300    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:42.548618    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:43.043836    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:43.043862    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.043875    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.043884    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.047393    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:43.048068    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:43.048075    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.048082    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.048085    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.049985    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:43.544065    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:43.544086    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.544097    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.544117    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.547029    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:43.547638    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:43.547645    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:43.547651    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:43.547657    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:43.549301    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.044961    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:44.044988    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.045023    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.045031    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.048485    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:44.049062    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:44.049070    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.049076    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.049081    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.050740    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.545903    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:44.545928    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.545945    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.545956    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.549955    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:44.550463    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:44.550470    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:44.550476    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:44.550479    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:44.552158    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:44.552451    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:45.045945    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:45.045972    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.045984    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.045991    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.049387    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:45.050098    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:45.050109    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.050117    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.050123    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.051738    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:45.544140    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:45.544159    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.544168    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.544172    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.546873    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:45.547352    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:45.547360    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:45.547366    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:45.547370    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:45.548773    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:46.043998    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:46.044020    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.044032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.044038    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.047292    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:46.047783    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:46.047790    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.047795    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.047798    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.049310    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:46.544571    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:46.544597    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.544609    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.544616    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.548134    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:46.548745    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:46.548755    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:46.548762    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:46.548771    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:46.550544    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:47.044994    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:47.045015    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.045026    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.045032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.048476    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:47.049178    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:47.049189    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.049197    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.049202    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.050811    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:47.051136    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:47.545774    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:47.545796    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.545809    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.545816    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.549567    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:47.550282    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:47.550292    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:47.550308    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:47.550313    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:47.552150    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:48.044237    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:48.044252    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.044262    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.044267    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.046593    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:48.047034    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:48.047041    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.047047    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.047051    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.048719    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:48.544694    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:48.544762    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.544781    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.544788    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.548156    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:48.548805    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:48.548813    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:48.548819    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:48.548830    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:48.550405    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:49.045819    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:49.045842    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.045854    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.045864    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.049109    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:49.049810    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:49.049821    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.049828    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.049834    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.051675    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:49.052058    5233 pod_ready.go:103] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"False"
	I1213 11:34:49.546343    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:49.546370    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.546384    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.546391    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.550058    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:49.550673    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:49.550684    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:49.550692    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:49.550697    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:49.552559    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.044335    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:50.044361    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.044373    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.044380    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.048285    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.048872    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.048879    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.048885    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.048889    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.050497    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.544806    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:34:50.544862    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.544875    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.544885    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.548751    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.549398    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.549406    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.549412    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.549416    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.550966    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.551275    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.551284    5233 pod_ready.go:82] duration metric: took 15.007121321s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
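The fifteen seconds of paired GETs above are the pod_ready poll loop: roughly every 500ms the client fetches the pod, then the node it is scheduled on, until the pod's Ready condition turns True. A minimal client-go sketch of an equivalent check follows; the kubeconfig path is a placeholder and this is an illustration of the pattern, not minikube's actual pod_ready.go.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the test run uses its own profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms for up to 6 minutes, mirroring the cadence and
	// the "waiting up to 6m0s" budget visible in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx,
				"kube-controller-manager-ha-224000-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			return podReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}
```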
	I1213 11:34:50.551291    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.551328    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:34:50.551333    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.551338    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.551343    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.553068    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.553502    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.553509    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.553514    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.553517    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.555304    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.555632    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.555640    5233 pod_ready.go:82] duration metric: took 4.343987ms for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.555647    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.555686    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:34:50.555691    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.555696    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.555699    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.557601    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.557970    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:34:50.557977    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.557983    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.557986    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.559417    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.559883    5233 pod_ready.go:93] pod "kube-proxy-7b8ch" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.559891    5233 pod_ready.go:82] duration metric: took 4.238545ms for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.559899    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.559932    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:34:50.559949    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.559956    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.559960    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.562004    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:50.562348    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:50.562356    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.562361    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.562365    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.563914    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.564222    5233 pod_ready.go:93] pod "kube-proxy-9wj7k" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.564231    5233 pod_ready.go:82] duration metric: took 4.326466ms for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.564237    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.564269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:34:50.564274    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.564280    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.564293    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.565929    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.566322    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:50.566328    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.566334    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.566337    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.567867    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:50.568197    5233 pod_ready.go:93] pod "kube-proxy-9wsr4" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.568208    5233 pod_ready.go:82] duration metric: took 3.96239ms for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.568215    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:50.745519    5233 request.go:632] Waited for 177.216442ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:34:50.745569    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:34:50.745584    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.745599    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.745607    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.748965    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.946816    5233 request.go:632] Waited for 197.362494ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.946935    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:50.946944    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:50.946958    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:50.946964    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:50.950494    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:50.950832    5233 pod_ready.go:93] pod "kube-proxy-gmw9z" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:50.950846    5233 pod_ready.go:82] duration metric: took 382.598257ms for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
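The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter on the REST client (QPS/burst on rest.Config), not from server-side API Priority and Fairness. A small sketch of how such a limiter delays a burst of requests; the QPS and burst values below are client-go's usual defaults but are assumptions here, not read from this run.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// client-go defaults to roughly QPS=5, Burst=10 unless overridden on rest.Config.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	start := time.Now()
	for i := 0; i < 15; i++ {
		// Accept blocks once the burst is spent, producing ~200ms gaps at QPS=5 -
		// the same order of delay reported in the log lines above.
		limiter.Accept()
		fmt.Printf("request %2d at %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}
```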
	I1213 11:34:50.950855    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.146433    5233 request.go:632] Waited for 195.515852ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:34:51.146519    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:34:51.146528    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.146539    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.146545    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.150256    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.346180    5233 request.go:632] Waited for 195.336158ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:51.346304    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:34:51.346314    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.346325    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.346333    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.350059    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.350701    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:51.350714    5233 pod_ready.go:82] duration metric: took 399.82535ms for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.350723    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.546175    5233 request.go:632] Waited for 195.389456ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:34:51.546301    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:34:51.546322    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.546341    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.546357    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.549469    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:51.745754    5233 request.go:632] Waited for 195.890122ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:51.745865    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:34:51.745871    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.745877    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.745881    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.747825    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:34:51.748179    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:51.748191    5233 pod_ready.go:82] duration metric: took 397.435321ms for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.748198    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:51.945402    5233 request.go:632] Waited for 197.127949ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:34:51.945442    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:34:51.945447    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:51.945453    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:51.945457    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:51.948002    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:34:52.146346    5233 request.go:632] Waited for 197.812373ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:52.146446    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:34:52.146458    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.146470    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.146477    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.150176    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.150503    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:34:52.150514    5233 pod_ready.go:82] duration metric: took 402.286111ms for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:34:52.150525    5233 pod_ready.go:39] duration metric: took 18.409559513s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 11:34:52.150552    5233 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:34:52.150642    5233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:52.164316    5233 api_server.go:72] duration metric: took 27.417579599s to wait for apiserver process to appear ...
	I1213 11:34:52.164330    5233 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:34:52.164347    5233 api_server.go:253] Checking apiserver healthz at https://192.169.0.6:8443/healthz ...
	I1213 11:34:52.168889    5233 api_server.go:279] https://192.169.0.6:8443/healthz returned 200:
	ok
	I1213 11:34:52.168929    5233 round_trippers.go:463] GET https://192.169.0.6:8443/version
	I1213 11:34:52.168934    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.168946    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.168950    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.169508    5233 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1213 11:34:52.169593    5233 api_server.go:141] control plane version: v1.31.2
	I1213 11:34:52.169605    5233 api_server.go:131] duration metric: took 5.269383ms to wait for apiserver health ...
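The health gate above is two plain requests: GET /healthz, which a healthy apiserver answers with the literal body "ok", followed by GET /version to read the control-plane version. A hedged client-go sketch of both calls (placeholder kubeconfig path, not minikube's api_server.go):

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz through the authenticated REST client; body is "ok" when healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("healthz:", string(body))

	// GET /version, the second request in the log.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
```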
	I1213 11:34:52.169610    5233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:34:52.346116    5233 request.go:632] Waited for 176.438003ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.346261    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.346270    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.346282    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.346288    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.351411    5233 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1213 11:34:52.356738    5233 system_pods.go:59] 26 kube-system pods found
	I1213 11:34:52.356755    5233 system_pods.go:61] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:34:52.356759    5233 system_pods.go:61] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:34:52.356761    5233 system_pods.go:61] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:34:52.356765    5233 system_pods.go:61] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:34:52.356768    5233 system_pods.go:61] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:34:52.356771    5233 system_pods.go:61] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:34:52.356774    5233 system_pods.go:61] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:34:52.356776    5233 system_pods.go:61] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:34:52.356780    5233 system_pods.go:61] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:34:52.356782    5233 system_pods.go:61] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:34:52.356785    5233 system_pods.go:61] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:34:52.356788    5233 system_pods.go:61] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:34:52.356791    5233 system_pods.go:61] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:34:52.356793    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:34:52.356796    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:34:52.356799    5233 system_pods.go:61] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:34:52.356802    5233 system_pods.go:61] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:34:52.356804    5233 system_pods.go:61] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:34:52.356807    5233 system_pods.go:61] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:34:52.356810    5233 system_pods.go:61] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:34:52.356813    5233 system_pods.go:61] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:34:52.356815    5233 system_pods.go:61] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:34:52.356818    5233 system_pods.go:61] "kube-vip-ha-224000" [5e087427-c14c-4a6c-8a87-f20ea865cca7] Running
	I1213 11:34:52.356821    5233 system_pods.go:61] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:34:52.356823    5233 system_pods.go:61] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:34:52.356826    5233 system_pods.go:61] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running
	I1213 11:34:52.356830    5233 system_pods.go:74] duration metric: took 187.204101ms to wait for pod list to return data ...
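The 26-pod inventory above is a single List over the kube-system namespace, with each pod's phase checked client-side. An equivalent sketch (placeholder kubeconfig path; the bracketed IDs in the log are pod UIDs):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase) // e.g. Running
	}
}
```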
	I1213 11:34:52.356836    5233 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:34:52.547123    5233 request.go:632] Waited for 190.17926ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:34:52.547175    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:34:52.547184    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.547197    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.547205    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.550987    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.551153    5233 default_sa.go:45] found service account: "default"
	I1213 11:34:52.551169    5233 default_sa.go:55] duration metric: took 194.315508ms for default service account to be created ...
	I1213 11:34:52.551177    5233 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:34:52.745633    5233 request.go:632] Waited for 194.336495ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.745749    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:34:52.745782    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.745804    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.745815    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.750592    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:34:52.755864    5233 system_pods.go:86] 26 kube-system pods found
	I1213 11:34:52.755877    5233 system_pods.go:89] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:34:52.755881    5233 system_pods.go:89] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:34:52.755884    5233 system_pods.go:89] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:34:52.755887    5233 system_pods.go:89] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:34:52.755890    5233 system_pods.go:89] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:34:52.755893    5233 system_pods.go:89] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:34:52.755896    5233 system_pods.go:89] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:34:52.755899    5233 system_pods.go:89] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:34:52.755902    5233 system_pods.go:89] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:34:52.755905    5233 system_pods.go:89] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:34:52.755908    5233 system_pods.go:89] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:34:52.755911    5233 system_pods.go:89] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:34:52.755914    5233 system_pods.go:89] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:34:52.755917    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:34:52.755919    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:34:52.755923    5233 system_pods.go:89] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:34:52.755926    5233 system_pods.go:89] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:34:52.755929    5233 system_pods.go:89] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:34:52.755932    5233 system_pods.go:89] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:34:52.755935    5233 system_pods.go:89] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:34:52.755938    5233 system_pods.go:89] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:34:52.755941    5233 system_pods.go:89] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:34:52.755944    5233 system_pods.go:89] "kube-vip-ha-224000" [5e087427-c14c-4a6c-8a87-f20ea865cca7] Running
	I1213 11:34:52.755946    5233 system_pods.go:89] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:34:52.755952    5233 system_pods.go:89] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:34:52.755956    5233 system_pods.go:89] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running
	I1213 11:34:52.755960    5233 system_pods.go:126] duration metric: took 204.766483ms to wait for k8s-apps to be running ...
	I1213 11:34:52.755970    5233 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:34:52.756038    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:34:52.767749    5233 system_svc.go:56] duration metric: took 11.776634ms WaitForService to wait for kubelet
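The kubelet check above is one SSH command: `systemctl is-active --quiet` exits 0 only when the unit is active, so the remote exit status alone answers the question. A sketch with golang.org/x/crypto/ssh; the host, user, and key path are placeholders, not values from this run.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder credentials; the test uses the machine's generated SSH key.
	key, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.169.0.8:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Run returns nil only when the remote command exits 0, i.e. kubelet is active.
	err = session.Run("sudo systemctl is-active --quiet service kubelet")
	fmt.Println("kubelet running:", err == nil)
}
```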
	I1213 11:34:52.767765    5233 kubeadm.go:582] duration metric: took 28.020992834s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:34:52.767792    5233 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:34:52.945101    5233 request.go:632] Waited for 177.223908ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes
	I1213 11:34:52.945150    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes
	I1213 11:34:52.945158    5233 round_trippers.go:469] Request Headers:
	I1213 11:34:52.945170    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:34:52.945176    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:34:52.949117    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:34:52.950061    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950074    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950086    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950090    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950094    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950097    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950099    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:34:52.950102    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:34:52.950105    5233 node_conditions.go:105] duration metric: took 182.296841ms to run NodePressure ...
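The NodePressure pass reads each node's capacity from a single GET /api/v1/nodes: the four "ephemeral capacity / cpu capacity" pairs above are the four cluster nodes. A sketch of reading the same fields (placeholder kubeconfig path):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource.Quantity; e.g. "17734596Ki" and "2" in this run.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
```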
	I1213 11:34:52.950114    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:34:52.950132    5233 start.go:255] writing updated cluster config ...
	I1213 11:34:52.972494    5233 out.go:201] 
	I1213 11:34:52.993694    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:34:52.993820    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.016586    5233 out.go:177] * Starting "ha-224000-m03" control-plane node in "ha-224000" cluster
	I1213 11:34:53.090440    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:34:53.090478    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:34:53.090696    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:34:53.090718    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:34:53.090850    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.091713    5233 start.go:360] acquireMachinesLock for ha-224000-m03: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:34:53.091822    5233 start.go:364] duration metric: took 84.906µs to acquireMachinesLock for "ha-224000-m03"
	I1213 11:34:53.091846    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:34:53.091854    5233 fix.go:54] fixHost starting: m03
	I1213 11:34:53.092290    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:34:53.092327    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:34:53.104639    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51869
	I1213 11:34:53.104960    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:34:53.105280    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:34:53.105294    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:34:53.105531    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:34:53.105628    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:34:53.105732    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetState
	I1213 11:34:53.105817    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.105891    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid from json: 4216
	I1213 11:34:53.107018    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid 4216 missing from process table
	I1213 11:34:53.107070    5233 fix.go:112] recreateIfNeeded on ha-224000-m03: state=Stopped err=<nil>
	I1213 11:34:53.107090    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	W1213 11:34:53.107166    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:34:53.128583    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m03" ...
	I1213 11:34:53.170463    5233 main.go:141] libmachine: (ha-224000-m03) Calling .Start
	I1213 11:34:53.170757    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.170820    5233 main.go:141] libmachine: (ha-224000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid
	I1213 11:34:53.173341    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid 4216 missing from process table
	I1213 11:34:53.173354    5233 main.go:141] libmachine: (ha-224000-m03) DBG | pid 4216 is in state "Stopped"
	I1213 11:34:53.173370    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid...
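The stale-pidfile cleanup above follows the classic pattern: read the recorded pid, probe it with signal 0 (which checks existence without delivering anything), and remove the file if the process is gone. A minimal sketch under those assumptions; the pidfile path is a placeholder.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive reports whether a process with the given pid exists, using the
// "kill -0" probe; EPERM still means the process is there.
func pidAlive(pid int) bool {
	err := syscall.Kill(pid, syscall.Signal(0))
	return err == nil || err == syscall.EPERM
}

func main() {
	// Placeholder path; the real pidfile lives under the machine's state dir.
	pidfile := filepath.Join(os.TempDir(), "hyperkit.pid")
	data, err := os.ReadFile(pidfile)
	if err != nil {
		fmt.Println("no pidfile:", err)
		return
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		fmt.Println("malformed pidfile:", err)
		return
	}
	if !pidAlive(pid) {
		fmt.Printf("pid %d missing from process table, removing stale pid file\n", pid)
		os.Remove(pidfile)
	}
}
```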
	I1213 11:34:53.173814    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Using UUID a949994f-ed60-4f04-8e19-b8e4ec0a7cc4
	I1213 11:34:53.198944    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Generated MAC a6:90:90:dd:31:4c
	I1213 11:34:53.198971    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:34:53.199150    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043b710)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:34:53.199192    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00043b710)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:34:53.199234    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a949994f-ed60-4f04-8e19-b8e4ec0a7cc4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/ha-224000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:34:53.199276    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a949994f-ed60-4f04-8e19-b8e4ec0a7cc4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/ha-224000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:34:53.199299    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:34:53.201829    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 DEBUG: hyperkit: Pid is 5320
	I1213 11:34:53.202230    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Attempt 0
	I1213 11:34:53.202250    5233 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:34:53.202308    5233 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid from json: 5320
	I1213 11:34:53.203502    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Searching for a6:90:90:dd:31:4c in /var/db/dhcpd_leases ...
	I1213 11:34:53.203593    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:34:53.203623    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9a30}
	I1213 11:34:53.203647    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:34:53.203666    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:34:53.203681    5233 main.go:141] libmachine: (ha-224000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c98c5}
	I1213 11:34:53.203694    5233 main.go:141] libmachine: (ha-224000-m03) DBG | Found match: a6:90:90:dd:31:4c
	I1213 11:34:53.203705    5233 main.go:141] libmachine: (ha-224000-m03) DBG | IP: 192.169.0.8
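/var/db/dhcpd_leases on macOS is a brace-delimited text file of name/ip_address/hw_address entries, and the driver scans it for the VM's generated MAC to learn its IP, as the "Searching for ... Found match" lines show. A small parser sketch of that lookup; the field handling follows the format visible in the log (hw_address carries a leading type byte, and macOS drops leading zeros in MAC octets), but this is not the driver's actual code.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// leaseIPForMAC scans the macOS vmnet lease file for an entry whose
// hw_address matches mac and returns its ip_address.
func leaseIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// Stored as "1,a6:90:90:dd:31:4c"; drop the leading type byte.
			hw = strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(hw, ","); i >= 0 {
				hw = hw[i+1:]
			}
		case line == "}":
			if hw == mac {
				return ip, nil
			}
			ip, hw = "", "" // reset for the next entry
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := leaseIPForMAC("/var/db/dhcpd_leases", "a6:90:90:dd:31:4c")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip)
}
```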
	I1213 11:34:53.203714    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetConfigRaw
	I1213 11:34:53.204410    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:34:53.204623    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:34:53.205075    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:34:53.205084    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:34:53.205213    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:34:53.205302    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:34:53.205398    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:34:53.205497    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:34:53.205650    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:34:53.205789    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:34:53.205928    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:34:53.205935    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:34:53.212601    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:34:53.221560    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:34:53.222531    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:34:53.222558    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:34:53.222580    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:34:53.222599    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:34:53.612220    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:34:53.612234    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:34:53.727037    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:34:53.727057    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:34:53.727094    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:34:53.727117    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:34:53.727874    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:34:53.727886    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:34:59.521710    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:34:59.521832    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:34:59.521841    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:34:59.545358    5233 main.go:141] libmachine: (ha-224000-m03) DBG | 2024/12/13 11:34:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:35:28.268303    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:35:28.268318    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.268453    5233 buildroot.go:166] provisioning hostname "ha-224000-m03"
	I1213 11:35:28.268464    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.268545    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.268633    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.268718    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.268794    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.268890    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.269047    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.269192    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.269201    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m03 && echo "ha-224000-m03" | sudo tee /etc/hostname
	I1213 11:35:28.331907    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m03
	
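A minimal sketch of the hostname-provisioning step echoed above, assuming golang.org/x/crypto/ssh: run the same compound `sudo hostname ... | sudo tee /etc/hostname` command over SSH. Host, user, and key path are placeholders, not values read from this run.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "192.169.0.8:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	name := "ha-224000-m03"
	out, err := sess.CombinedOutput(
		fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name))
	fmt.Printf("output: %s err: %v\n", out, err)
}
```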
	I1213 11:35:28.331923    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.332060    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.332169    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.332280    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.332367    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.332526    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.332658    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.332669    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:35:28.389916    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:35:28.389931    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:35:28.389961    5233 buildroot.go:174] setting up certificates
	I1213 11:35:28.389971    5233 provision.go:84] configureAuth start
	I1213 11:35:28.389982    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetMachineName
	I1213 11:35:28.390117    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:28.390208    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.390313    5233 provision.go:143] copyHostCerts
	I1213 11:35:28.390344    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:35:28.390394    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:35:28.390401    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:35:28.390555    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:35:28.390787    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:35:28.390820    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:35:28.390825    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:35:28.390910    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:35:28.391077    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:35:28.391106    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:35:28.391111    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:35:28.391228    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:35:28.391418    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m03 san=[127.0.0.1 192.169.0.8 ha-224000-m03 localhost minikube]
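The "generating server cert" line signs a per-node server certificate against the shared CA with the printed SAN list baked in. A hedged sketch with Go's crypto/x509; only the SAN values and org come from the log line, while file names, key size, and the validity window are assumptions.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		log.Fatal(err)
	}
	return v
}

func main() {
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))      // assumed path
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem"))) // assumed path
	caCert := must(x509.ParseCertificate(caBlock.Bytes))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes)) // assumes an RSA PKCS#1 CA key
	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-224000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as printed: san=[127.0.0.1 192.169.0.8 ha-224000-m03 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
		DNSNames:    []string{"ha-224000-m03", "localhost", "minikube"},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```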
	I1213 11:35:28.615259    5233 provision.go:177] copyRemoteCerts
	I1213 11:35:28.615322    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:35:28.615337    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.615483    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.615599    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.615704    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.615808    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:28.648163    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:35:28.648235    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:35:28.668111    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:35:28.668178    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:35:28.688091    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:35:28.688163    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:35:28.707920    5233 provision.go:87] duration metric: took 317.933618ms to configureAuth
	I1213 11:35:28.707937    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:35:28.708107    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:28.708120    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:28.708271    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.708384    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.708472    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.708567    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.708672    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.708792    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.708915    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.708923    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:35:28.759762    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:35:28.759775    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:35:28.759854    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:35:28.759870    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.760005    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.760093    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.760190    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.760274    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.760438    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.760606    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.760655    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:35:28.823874    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	Environment=NO_PROXY=192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:35:28.823891    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:28.824044    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:28.824161    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.824266    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:28.824376    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:28.824572    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:28.824732    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:28.824746    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:35:30.486456    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:35:30.486475    5233 machine.go:96] duration metric: took 37.280827239s to provisionDockerMachine
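The unit update just above is deliberately idempotent: the rendered unit goes to docker.service.new, and the live service is only replaced, daemon-reloaded, and restarted when `diff` reports a change (the diff error in this run simply means no prior unit existed, so the new file is moved into place). A local-only sketch of that swap-on-change pattern, comparing bytes rather than shelling out to diff; this is not minikube's actual code.

```go
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the restart entirely
	}
	// Write the sidecar first, then move it into place, mirroring
	// docker.service.new -> docker.service above.
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Printf("systemctl %v failed: %s", args, out)
			return err
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		log.Fatal(err)
	}
}
```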
	I1213 11:35:30.486485    5233 start.go:293] postStartSetup for "ha-224000-m03" (driver="hyperkit")
	I1213 11:35:30.486499    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:35:30.486509    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.486716    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:35:30.486731    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.486828    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.486916    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.487008    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.487103    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.519400    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:35:30.522965    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:35:30.522976    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:35:30.523076    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:35:30.523222    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:35:30.523229    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:35:30.523407    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:35:30.531672    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:35:30.550850    5233 start.go:296] duration metric: took 64.356166ms for postStartSetup
	I1213 11:35:30.550875    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.551059    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:35:30.551072    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.551169    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.551256    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.551369    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.551457    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.583546    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:35:30.583619    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:35:30.638958    5233 fix.go:56] duration metric: took 37.546530399s for fixHost
	I1213 11:35:30.638984    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.639131    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.639231    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.639317    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.639400    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.639557    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:35:30.639690    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1213 11:35:30.639697    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:35:30.691357    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118530.813836388
	
	I1213 11:35:30.691371    5233 fix.go:216] guest clock: 1734118530.813836388
	I1213 11:35:30.691376    5233 fix.go:229] Guest: 2024-12-13 11:35:30.813836388 -0800 PST Remote: 2024-12-13 11:35:30.638973 -0800 PST m=+127.105464891 (delta=174.863388ms)
	I1213 11:35:30.691387    5233 fix.go:200] guest clock delta is within tolerance: 174.863388ms
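fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the boot when the skew is small; here the delta is ~175ms. A sketch of that comparison using the timestamps from the lines above; the 2s tolerance is an assumed illustration value, not necessarily minikube's constant.

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Guest and host timestamps taken from the log lines above.
	delta, err := clockDelta("1734118530.813836388", time.Unix(1734118530, 638973000))
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within tolerance=%v\n", delta,
		math.Abs(delta.Seconds()) <= tolerance.Seconds())
}
```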
	I1213 11:35:30.691390    5233 start.go:83] releasing machines lock for "ha-224000-m03", held for 37.598987831s
	I1213 11:35:30.691409    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.691545    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:30.716697    5233 out.go:177] * Found network options:
	I1213 11:35:30.736372    5233 out.go:177]   - NO_PROXY=192.169.0.6,192.169.0.7
	W1213 11:35:30.757863    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:35:30.757920    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:35:30.757939    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.758810    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.759058    5233 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:35:30.759249    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:35:30.759286    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	W1213 11:35:30.759290    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:35:30.759313    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:35:30.759449    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:35:30.759471    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:35:30.759537    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.759655    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:35:30.759708    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.759905    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:35:30.759938    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.760131    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:35:30.760152    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:35:30.760321    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	W1213 11:35:30.790341    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:35:30.790425    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:35:30.835439    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 11:35:30.835453    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:35:30.835523    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:35:30.850635    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:35:30.858947    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:35:30.867636    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:35:30.867708    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:35:30.876811    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:35:30.885325    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:35:30.893786    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:35:30.902226    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:35:30.910790    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:35:30.919236    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:35:30.927803    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:35:30.936377    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:35:30.943894    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:35:30.943955    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:35:30.952569    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
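The sysctl probe above fails with status 255 because /proc/sys/net/bridge only appears once br_netfilter is loaded, so the provisioner falls back to modprobe and then enables IPv4 forwarding. The same sequence as a local sketch; it needs root, and minikube runs these steps over SSH inside the guest.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// /proc/sys/net/bridge/* only exists once br_netfilter is loaded,
		// which is exactly the status-255 failure shown above.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}
```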
	I1213 11:35:30.959891    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.061578    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:35:31.081433    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:35:31.081517    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:35:31.100335    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:35:31.112429    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:35:31.127499    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:35:31.138533    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:35:31.148917    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:35:31.174782    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:35:31.184889    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:35:31.201805    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:35:31.204856    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:35:31.212060    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:35:31.225973    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:35:31.326706    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:35:31.431909    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:35:31.431936    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:35:31.446011    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.546239    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:35:33.884526    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.338279376s)
	I1213 11:35:33.884605    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 11:35:33.896180    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:35:33.907512    5233 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 11:35:34.018152    5233 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 11:35:34.117342    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:34.216289    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 11:35:34.229723    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 11:35:34.241050    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:34.333405    5233 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 11:35:34.400848    5233 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 11:35:34.400950    5233 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 11:35:34.406614    5233 start.go:563] Will wait 60s for crictl version
	I1213 11:35:34.406682    5233 ssh_runner.go:195] Run: which crictl
	I1213 11:35:34.409985    5233 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 11:35:34.437608    5233 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1213 11:35:34.437696    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:35:34.456769    5233 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 11:35:34.499545    5233 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
	I1213 11:35:34.556752    5233 out.go:177]   - env NO_PROXY=192.169.0.6
	I1213 11:35:34.577782    5233 out.go:177]   - env NO_PROXY=192.169.0.6,192.169.0.7
	I1213 11:35:34.598561    5233 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:35:34.598902    5233 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1213 11:35:34.602518    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
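This one-liner pins host.minikube.internal in /etc/hosts idempotently: strip any previous entry, append the fresh mapping, and copy the result back (the same trick reappears later for control-plane.minikube.internal). A rough Go equivalent; the log's version stages through /tmp/h.$$ and sudo cp, while this sketch writes the file directly.

```go
package main

import (
	"log"
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as `grep -v $'\t<name>$'`: drop tab-separated entries for the name.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.169.0.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```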
	I1213 11:35:34.612856    5233 mustload.go:65] Loading cluster: ha-224000
	I1213 11:35:34.613037    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:34.613269    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:35:34.613292    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:35:34.625281    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51891
	I1213 11:35:34.625655    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:35:34.626009    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:35:34.626025    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:35:34.626248    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:35:34.626340    5233 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:35:34.626428    5233 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:35:34.626490    5233 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:35:34.627676    5233 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:35:34.627955    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:35:34.627988    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:35:34.640060    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51893
	I1213 11:35:34.640392    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:35:34.640716    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:35:34.640735    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:35:34.640975    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:35:34.641081    5233 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:35:34.641190    5233 certs.go:68] Setting up /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000 for IP: 192.169.0.8
	I1213 11:35:34.641199    5233 certs.go:194] generating shared ca certs ...
	I1213 11:35:34.641214    5233 certs.go:226] acquiring lock for ca certs: {Name:mk91f965c7deab0f9461a3f3e8b07e314a206b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:35:34.641369    5233 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key
	I1213 11:35:34.641440    5233 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key
	I1213 11:35:34.641449    5233 certs.go:256] generating profile certs ...
	I1213 11:35:34.641547    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key
	I1213 11:35:34.641650    5233 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key.f4268d28
	I1213 11:35:34.641704    5233 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key
	I1213 11:35:34.641711    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:35:34.641732    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:35:34.641753    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:35:34.641772    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:35:34.641790    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:35:34.641809    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:35:34.641828    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:35:34.641845    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:35:34.641926    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem (1338 bytes)
	W1213 11:35:34.641977    5233 certs.go:480] ignoring /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796_empty.pem, impossibly tiny 0 bytes
	I1213 11:35:34.641992    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:35:34.642032    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem (1078 bytes)
	I1213 11:35:34.642067    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:35:34.642096    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem (1675 bytes)
	I1213 11:35:34.642163    5233 certs.go:484] found cert: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:35:34.642196    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:34.642223    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem -> /usr/share/ca-certificates/1796.pem
	I1213 11:35:34.642243    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /usr/share/ca-certificates/17962.pem
	I1213 11:35:34.642269    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:35:34.642361    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:35:34.642463    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:35:34.642554    5233 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:35:34.642635    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:35:34.669703    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 11:35:34.673030    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 11:35:34.682641    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 11:35:34.686133    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 11:35:34.695208    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 11:35:34.698292    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 11:35:34.708147    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 11:35:34.711343    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1213 11:35:34.720522    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 11:35:34.723933    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 11:35:34.733200    5233 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 11:35:34.736904    5233 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1213 11:35:34.748040    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:35:34.768078    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 11:35:34.787823    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:35:34.807347    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:35:34.827367    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:35:34.847452    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:35:34.866717    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:35:34.886226    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:35:34.905392    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:35:34.924502    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/1796.pem --> /usr/share/ca-certificates/1796.pem (1338 bytes)
	I1213 11:35:34.944848    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /usr/share/ca-certificates/17962.pem (1708 bytes)
	I1213 11:35:34.964162    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 11:35:34.977883    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 11:35:34.991483    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 11:35:35.005083    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1213 11:35:35.018833    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 11:35:35.033559    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1213 11:35:35.047330    5233 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 11:35:35.060953    5233 ssh_runner.go:195] Run: openssl version
	I1213 11:35:35.065093    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1796.pem && ln -fs /usr/share/ca-certificates/1796.pem /etc/ssl/certs/1796.pem"
	I1213 11:35:35.074224    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.077601    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:14 /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.077646    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1796.pem
	I1213 11:35:35.081873    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1796.pem /etc/ssl/certs/51391683.0"
	I1213 11:35:35.091167    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17962.pem && ln -fs /usr/share/ca-certificates/17962.pem /etc/ssl/certs/17962.pem"
	I1213 11:35:35.100351    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.103730    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:14 /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.103786    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17962.pem
	I1213 11:35:35.107944    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17962.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 11:35:35.116996    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 11:35:35.126132    5233 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.129577    5233 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.129642    5233 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:35.133859    5233 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 11:35:35.143102    5233 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:35:35.146630    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:35:35.150908    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:35:35.155104    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:35:35.159301    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:35:35.163626    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:35:35.167845    5233 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
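Each `-checkend 86400` probe above asks openssl whether the certificate would expire within the next 24 hours; a zero exit means it stays valid, so regeneration is skipped. The equivalent check in Go, using one of the paths probed above:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	cutoff := time.Now().Add(86400 * time.Second) // -checkend 86400
	fmt.Println("valid past 24h:", cert.NotAfter.After(cutoff))
}
```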
	I1213 11:35:35.172217    5233 kubeadm.go:934] updating node {m03 192.169.0.8 8443 v1.31.2 docker true true} ...
	I1213 11:35:35.172277    5233 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-224000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-224000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
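The kubelet drop-in above uses the standard systemd trick of an empty `ExecStart=` to clear the inherited command before setting the node-specific one. An illustrative render of such a drop-in with text/template; the struct fields are placeholders standing in for minikube's node config, and the flag list is abbreviated from the unit above.

```go
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Name}} --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	t.Execute(os.Stdout, struct{ Version, Name, IP string }{
		Version: "v1.31.2", Name: "ha-224000-m03", IP: "192.169.0.8",
	})
}
```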
	I1213 11:35:35.172296    5233 kube-vip.go:115] generating kube-vip config ...
	I1213 11:35:35.172356    5233 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1213 11:35:35.190873    5233 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1213 11:35:35.190925    5233 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
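kube-vip runs as a static pod on each control plane, wins the plndr-cp-lock lease via leader election, and answers for the VIP 192.169.0.254 while load-balancing port 8443. A quick sanity check one might run on the generated manifest, assuming gopkg.in/yaml.v3; the file path matches where the log later copies the manifest.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var pod struct {
		Spec struct {
			Containers []struct {
				Env []struct{ Name, Value string }
			}
		}
	}
	if err := yaml.Unmarshal(raw, &pod); err != nil {
		log.Fatal(err)
	}
	if len(pod.Spec.Containers) == 0 {
		log.Fatal("no containers in manifest")
	}
	for _, e := range pod.Spec.Containers[0].Env {
		if e.Name == "address" {
			fmt.Println("VIP:", e.Value) // expect 192.169.0.254
		}
	}
}
```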
	I1213 11:35:35.191004    5233 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 11:35:35.201615    5233 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 11:35:35.201692    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 11:35:35.209907    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1213 11:35:35.223540    5233 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:35:35.237211    5233 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1213 11:35:35.251084    5233 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1213 11:35:35.254255    5233 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:35:35.264617    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:35.363941    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:35:35.379515    5233 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 11:35:35.379713    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:35:35.453014    5233 out.go:177] * Verifying Kubernetes components...
	I1213 11:35:35.489942    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:35.641418    5233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:35:35.655240    5233 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:35:35.655455    5233 kapi.go:59] client config for ha-224000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/client.key", CAFile:"/Users/jenkins/minikube-integration/20090-800/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ef2ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 11:35:35.655497    5233 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.6:8443
	I1213 11:35:35.655667    5233 node_ready.go:35] waiting up to 6m0s for node "ha-224000-m03" to be "Ready" ...
	I1213 11:35:35.655710    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:35.655716    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:35.655722    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:35.655726    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:35.658541    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.157140    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:36.157157    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.157163    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.157167    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.159862    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.160261    5233 node_ready.go:49] node "ha-224000-m03" has status "Ready":"True"
	I1213 11:35:36.160270    5233 node_ready.go:38] duration metric: took 504.598087ms for node "ha-224000-m03" to be "Ready" ...
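The node became Ready after ~505ms of polling; the loop above issues a GET for the node object roughly every 500ms until its Ready condition is True. The same loop in miniature with plain net/http; certificate paths are placeholders, and minikube verifies against its CA rather than skipping verification.

```go
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key") // assumed paths
	if err != nil {
		log.Fatal(err)
	}
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates:       []tls.Certificate{cert},
		InsecureSkipVerify: true, // demo only
	}}}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03")
		if err == nil {
			var node struct {
				Status struct {
					Conditions []struct{ Type, Status string }
				}
			}
			json.NewDecoder(resp.Body).Decode(&node)
			resp.Body.Close()
			for _, c := range node.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					fmt.Println("node Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	log.Fatal("timed out waiting for Ready")
}
```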
	I1213 11:35:36.160277    5233 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 11:35:36.160322    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:35:36.160332    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.160339    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.160345    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.164741    5233 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1213 11:35:36.170442    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:36.170504    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:36.170510    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.170516    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.170519    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.172921    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.173369    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:36.173377    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.173383    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.173390    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.175266    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:36.671483    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:36.671501    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.671508    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.671513    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.674268    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:36.675049    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:36.675058    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:36.675065    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:36.675069    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:36.678278    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:37.170684    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:37.170697    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.170703    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.170706    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.173103    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:37.173639    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:37.173649    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.173659    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.173663    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.175563    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:37.670841    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:37.670859    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.670867    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.670870    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.673709    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:37.674599    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:37.674609    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:37.674616    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:37.674619    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:37.677468    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.171983    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:38.172002    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.172010    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.172014    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.174562    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.175168    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:38.175176    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.175183    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.175186    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.177058    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:38.177428    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:38.671814    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:38.671831    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.671839    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.671843    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.674211    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:38.674978    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:38.674987    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:38.674994    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:38.675005    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:38.677077    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.171353    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:39.171371    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.171379    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.171383    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.173885    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.174765    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:39.174780    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.174787    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.174791    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.176969    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.672084    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:39.672101    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.672107    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.672111    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.674182    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:39.674701    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:39.674709    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:39.674715    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:39.674719    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:39.676491    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.170778    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:40.170793    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.170801    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.170805    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.172716    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.173201    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:40.173209    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.173215    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.173218    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.174782    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.670537    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:40.670554    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.670561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.670564    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.672905    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:40.673371    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:40.673378    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:40.673384    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:40.673388    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:40.675334    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:40.675698    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:41.170540    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:41.170555    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.170561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.170565    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.172610    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:41.173071    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:41.173079    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.173086    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.173090    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.174669    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:41.670954    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:41.670970    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.670977    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.670980    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.672906    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:41.673327    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:41.673335    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:41.673341    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:41.673346    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:41.674840    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:42.171591    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:42.171607    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.171614    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.171626    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.173848    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:42.174323    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:42.174331    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.174336    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.174339    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.176072    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:42.670670    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:42.670685    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.670691    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.670695    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.672916    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:42.673334    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:42.673342    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:42.673348    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:42.673352    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:42.674953    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:43.171018    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:43.171035    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.171041    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.171044    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.173500    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:43.173933    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:43.173942    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.173948    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.173952    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.175797    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:43.176282    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:43.671883    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:43.671900    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.671909    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.671914    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.674489    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:43.674937    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:43.674945    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:43.674952    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:43.674959    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:43.676652    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:44.171731    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:44.171747    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.171754    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.171757    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.174220    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:44.174839    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:44.174847    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.174853    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.174858    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.176592    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:44.671463    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:44.671523    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.671535    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.671543    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.674700    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:44.675156    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:44.675163    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:44.675169    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:44.675172    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:44.676845    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:45.170845    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:45.170871    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.170883    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.170890    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.174136    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:45.174847    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:45.174855    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.174861    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.174865    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.177051    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:45.177329    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:45.671539    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:45.671565    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.671577    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.671584    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.674504    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:45.674930    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:45.674937    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:45.674944    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:45.674948    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:45.676902    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:46.171017    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:46.171043    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.171055    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.171064    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.174349    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:46.175105    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:46.175113    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.175119    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.175123    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.176671    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:46.670718    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:46.670742    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.670753    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.670760    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.673727    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:46.674143    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:46.674150    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:46.674155    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:46.674159    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:46.675697    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:47.171141    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:47.171167    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.171181    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.171188    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.174674    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:47.175237    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:47.175248    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.175256    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.175283    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.177291    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:47.177630    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:47.670502    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:47.670539    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.670550    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.670555    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.673105    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:47.673592    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:47.673603    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:47.673624    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:47.673631    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:47.675150    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:48.170714    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:48.170743    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.170753    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.170759    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.174068    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:48.174871    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:48.174879    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.174885    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.174888    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.176423    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:48.671508    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:48.671547    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.671558    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.671563    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.673769    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:48.674261    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:48.674268    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:48.674274    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:48.674276    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:48.676263    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:49.170991    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:49.171006    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.171015    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.171020    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.173356    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.173868    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:49.173876    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.173882    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.173893    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.175974    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.671308    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:49.671349    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.671359    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.671375    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.674049    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:49.674657    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:49.674666    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:49.674672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:49.674676    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:49.676408    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:49.676866    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:50.170526    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:50.170546    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.170555    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.170560    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.172951    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:50.173418    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:50.173454    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.173462    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.173467    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.175187    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:50.671268    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:50.671306    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.671315    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.671319    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.673518    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:50.674124    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:50.674132    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:50.674139    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:50.674142    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:50.675972    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.172292    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:51.172318    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.172329    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.172335    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.175388    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:51.176242    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:51.176250    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.176255    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.176271    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.178034    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.672241    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:51.672259    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.672268    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.672273    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.674716    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:51.675171    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:51.675178    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:51.675184    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:51.675187    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:51.677031    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:51.677333    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:52.171324    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:52.171350    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.171394    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.171403    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.174624    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:52.175339    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:52.175347    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.175353    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.175356    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.176912    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:52.672143    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:52.672156    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.672163    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.672166    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.674142    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:52.674648    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:52.674656    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:52.674662    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:52.674665    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:52.676343    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.171789    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:53.171834    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.171845    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.171850    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.173997    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:53.174633    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:53.174641    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.174647    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.174652    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.176489    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.671631    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:53.671689    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.671702    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.671708    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.674629    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:53.675317    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:53.675324    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:53.675330    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:53.675335    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:53.677039    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:53.677545    5233 pod_ready.go:103] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"False"
	I1213 11:35:54.172269    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:54.172296    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.172309    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.172316    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.175190    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:54.175863    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:54.175871    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.175880    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.175884    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.177695    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:54.671631    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:54.671656    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.671679    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.671687    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.674858    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:54.675633    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:54.675644    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:54.675652    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:54.675659    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:54.677622    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.172159    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:55.172183    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.172195    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.172200    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.175352    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:55.175951    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.175961    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.175969    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.175974    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.177826    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.672525    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5ds6r
	I1213 11:35:55.672548    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.672561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.672568    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.676200    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:55.676655    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.676663    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.676669    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.676672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.679603    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.680007    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.680026    5233 pod_ready.go:82] duration metric: took 19.509731372s for pod "coredns-7c65d6cfc9-5ds6r" in "kube-system" namespace to be "Ready" ...
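
The ~500ms GET loop above is pod_ready.go polling the CoreDNS pod (and its node) until the pod's Ready condition turns True, which here took 19.5s. A minimal client-go sketch of that polling pattern, assuming a hypothetical waitPodReady helper rather than minikube's own code:

// waitPodReady polls a pod until its PodReady condition is True, mirroring
// the GET loop in the log above. A minimal client-go sketch; the helper
// name and 500ms interval are assumptions, not minikube's pod_ready.go.
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // surface API errors instead of retrying forever
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q Ready=%s\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not reported yet; keep polling
		})
}
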
	I1213 11:35:55.680040    5233 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.680088    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sswfx
	I1213 11:35:55.680094    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.680100    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.680104    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.682544    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.683008    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.683017    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.683023    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.683027    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.684867    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.685203    5233 pod_ready.go:93] pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.685212    5233 pod_ready.go:82] duration metric: took 5.165234ms for pod "coredns-7c65d6cfc9-sswfx" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.685222    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.685259    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000
	I1213 11:35:55.685264    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.685270    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.685274    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.687013    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.687444    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:55.687452    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.687458    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.687463    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.689192    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.689502    5233 pod_ready.go:93] pod "etcd-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.689510    5233 pod_ready.go:82] duration metric: took 4.282723ms for pod "etcd-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.689517    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.689546    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m02
	I1213 11:35:55.689551    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.689557    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.689561    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.691520    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.691918    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:55.691926    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.691932    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.691935    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.693585    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.694009    5233 pod_ready.go:93] pod "etcd-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.694017    5233 pod_ready.go:82] duration metric: took 4.494586ms for pod "etcd-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.694023    5233 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.694061    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-224000-m03
	I1213 11:35:55.694066    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.694071    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.694074    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.696047    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:55.696583    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:55.696591    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.696597    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.696602    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.698695    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:55.699182    5233 pod_ready.go:93] pod "etcd-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:55.699191    5233 pod_ready.go:82] duration metric: took 5.162024ms for pod "etcd-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.699204    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:55.873308    5233 request.go:632] Waited for 174.059147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:35:55.873398    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000
	I1213 11:35:55.873409    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:55.873420    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:55.873432    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:55.877057    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:56.073941    5233 request.go:632] Waited for 196.465756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:56.073990    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:56.073998    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.074007    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.074015    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.076268    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.076663    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.076673    5233 pod_ready.go:82] duration metric: took 377.466982ms for pod "kube-apiserver-ha-224000" in "kube-system" namespace to be "Ready" ...
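
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter on the client itself (rest.Config QPS and Burst, which default to 5 and 10), not from the API server; a tight poll loop like this one trips it. A sketch of loosening that limiter when building a clientset; the values and the newFastClient name are illustrative assumptions:

// newFastClient raises client-go's client-side rate limits so a polling
// loop does not hit the "Waited for ... due to client-side throttling"
// path seen in this log. Values here are illustrative only.
package readiness

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// Defaults are QPS=5, Burst=10; requests beyond that wait in the
	// token bucket and emit the throttling message above.
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
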
	I1213 11:35:56.076681    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.272907    5233 request.go:632] Waited for 196.189621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:35:56.272950    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m02
	I1213 11:35:56.272958    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.272967    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.272973    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.275118    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.473781    5233 request.go:632] Waited for 198.215756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:56.473814    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:56.473818    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.473825    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.473834    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.476052    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.476328    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.476337    5233 pod_ready.go:82] duration metric: took 399.655338ms for pod "kube-apiserver-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.476344    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.672963    5233 request.go:632] Waited for 196.573548ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:35:56.673025    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-224000-m03
	I1213 11:35:56.673042    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.673069    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.673082    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.676053    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:56.874041    5233 request.go:632] Waited for 197.242072ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:56.874093    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:56.874101    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:56.874112    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:56.874148    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:56.877393    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:56.877917    5233 pod_ready.go:93] pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:56.877925    5233 pod_ready.go:82] duration metric: took 401.579167ms for pod "kube-apiserver-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:56.877932    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.072677    5233 request.go:632] Waited for 194.687466ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:35:57.072807    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000
	I1213 11:35:57.072818    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.072829    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.072837    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.076583    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:57.273280    5233 request.go:632] Waited for 195.960523ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:57.273356    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:57.273364    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.273372    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.273377    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.275590    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:57.275864    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:57.275873    5233 pod_ready.go:82] duration metric: took 397.938639ms for pod "kube-controller-manager-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.275887    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.473240    5233 request.go:632] Waited for 197.314418ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:35:57.473276    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m02
	I1213 11:35:57.473282    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.473288    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.473293    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.479318    5233 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1213 11:35:57.672800    5233 request.go:632] Waited for 192.751323ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:57.672854    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:57.672865    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.672879    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.672883    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.674679    5233 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1213 11:35:57.674953    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:57.674964    5233 pod_ready.go:82] duration metric: took 399.075588ms for pod "kube-controller-manager-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.674971    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:57.872629    5233 request.go:632] Waited for 197.615913ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:35:57.872684    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-224000-m03
	I1213 11:35:57.872690    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:57.872698    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:57.872704    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:57.875523    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.072684    5233 request.go:632] Waited for 196.666527ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:58.072801    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:58.072814    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.072825    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.072835    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.076186    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.076572    5233 pod_ready.go:93] pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:58.076584    5233 pod_ready.go:82] duration metric: took 401.611001ms for pod "kube-controller-manager-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.076594    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.272566    5233 request.go:632] Waited for 195.927789ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:35:58.272623    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7b8ch
	I1213 11:35:58.272631    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.272639    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.272646    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.275090    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.473816    5233 request.go:632] Waited for 198.141217ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:35:58.473894    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m04
	I1213 11:35:58.473905    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.473916    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.473922    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.476808    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:58.477275    5233 pod_ready.go:98] node "ha-224000-m04" hosting pod "kube-proxy-7b8ch" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-224000-m04" has status "Ready":"Unknown"
	I1213 11:35:58.477286    5233 pod_ready.go:82] duration metric: took 400.69023ms for pod "kube-proxy-7b8ch" in "kube-system" namespace to be "Ready" ...
	E1213 11:35:58.477294    5233 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-224000-m04" hosting pod "kube-proxy-7b8ch" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-224000-m04" has status "Ready":"Unknown"
	I1213 11:35:58.477302    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.672582    5233 request.go:632] Waited for 195.231932ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:35:58.672629    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wj7k
	I1213 11:35:58.672638    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.672649    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.672657    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.676219    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.873974    5233 request.go:632] Waited for 197.337714ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:58.874026    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:35:58.874034    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:58.874045    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:58.874051    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:58.877592    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:58.877988    5233 pod_ready.go:93] pod "kube-proxy-9wj7k" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:58.878000    5233 pod_ready.go:82] duration metric: took 400.696273ms for pod "kube-proxy-9wj7k" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:58.878009    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.073381    5233 request.go:632] Waited for 195.314343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:35:59.073433    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wsr4
	I1213 11:35:59.073441    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.073449    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.073455    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.075792    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:59.273216    5233 request.go:632] Waited for 196.949491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:59.273267    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:35:59.273283    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.273292    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.273298    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.275702    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:35:59.276247    5233 pod_ready.go:93] pod "kube-proxy-9wsr4" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:59.276258    5233 pod_ready.go:82] duration metric: took 398.245999ms for pod "kube-proxy-9wsr4" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.276265    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.473693    5233 request.go:632] Waited for 197.370074ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:35:59.473831    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmw9z
	I1213 11:35:59.473842    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.473854    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.473862    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.477420    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:59.672646    5233 request.go:632] Waited for 194.659895ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:59.672759    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:35:59.672771    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.672784    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.672794    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.676016    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:35:59.676434    5233 pod_ready.go:93] pod "kube-proxy-gmw9z" in "kube-system" namespace has status "Ready":"True"
	I1213 11:35:59.676444    5233 pod_ready.go:82] duration metric: took 400.177932ms for pod "kube-proxy-gmw9z" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.676451    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:35:59.873284    5233 request.go:632] Waited for 196.790328ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:35:59.873409    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000
	I1213 11:35:59.873424    5233 round_trippers.go:469] Request Headers:
	I1213 11:35:59.873437    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:35:59.873446    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:35:59.876647    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.072905    5233 request.go:632] Waited for 195.872865ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:36:00.073011    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000
	I1213 11:36:00.073019    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.073028    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.073032    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.076068    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.076488    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.076498    5233 pod_ready.go:82] duration metric: took 400.046456ms for pod "kube-scheduler-ha-224000" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.076506    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.273249    5233 request.go:632] Waited for 196.676645ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:36:00.273361    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m02
	I1213 11:36:00.273380    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.273405    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.273414    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.276870    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.473222    5233 request.go:632] Waited for 195.664041ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:36:00.473283    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m02
	I1213 11:36:00.473291    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.473300    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.473304    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.475794    5233 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1213 11:36:00.476078    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.476087    5233 pod_ready.go:82] duration metric: took 399.579687ms for pod "kube-scheduler-ha-224000-m02" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.476096    5233 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.674009    5233 request.go:632] Waited for 197.794547ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:36:00.674081    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-224000-m03
	I1213 11:36:00.674092    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.674106    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.674121    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.677780    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.873417    5233 request.go:632] Waited for 194.907567ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:36:00.873476    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes/ha-224000-m03
	I1213 11:36:00.873488    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.873500    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.873508    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.876715    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:00.877199    5233 pod_ready.go:93] pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace has status "Ready":"True"
	I1213 11:36:00.877213    5233 pod_ready.go:82] duration metric: took 401.11429ms for pod "kube-scheduler-ha-224000-m03" in "kube-system" namespace to be "Ready" ...
	I1213 11:36:00.877234    5233 pod_ready.go:39] duration metric: took 24.717168247s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
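
[Note] The wait loop traced above polls each system pod (and, for daemonset pods, its node) until the PodReady condition reports True, skipping pods whose node is not Ready. A minimal sketch of that readiness predicate and polling loop, assuming client-go; the kubeconfig path and pod name are stand-ins, and this is not minikube's actual pod_ready.go code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady mirrors the predicate logged above: a pod counts as "Ready"
    // only when its PodReady condition reports ConditionTrue.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: kubeconfig path and pod name are illustrative only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms, for up to the 6m0s budget the log shows per pod.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-ha-224000", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat lookup errors as "not ready yet" and keep polling
                }
                return isPodReady(pod), nil
            })
        fmt.Println("pod ready:", err == nil)
    }
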
	I1213 11:36:00.877249    5233 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:36:00.877335    5233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:00.889500    5233 api_server.go:72] duration metric: took 25.510179125s to wait for apiserver process to appear ...
	I1213 11:36:00.889514    5233 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:36:00.889525    5233 api_server.go:253] Checking apiserver healthz at https://192.169.0.6:8443/healthz ...
	I1213 11:36:00.892661    5233 api_server.go:279] https://192.169.0.6:8443/healthz returned 200:
	ok
	I1213 11:36:00.892694    5233 round_trippers.go:463] GET https://192.169.0.6:8443/version
	I1213 11:36:00.892700    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:00.892706    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:00.892710    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:00.893221    5233 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1213 11:36:00.893255    5233 api_server.go:141] control plane version: v1.31.2
	I1213 11:36:00.893263    5233 api_server.go:131] duration metric: took 3.744726ms to wait for apiserver health ...
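
[Note] The healthz probe above is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A sketch of that probe against the apiserver address from the log; this skips TLS verification for brevity, whereas the real check trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: production code pins the cluster CA instead of skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.169.0.6:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not healthy yet:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
    }
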
	I1213 11:36:00.893268    5233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:36:01.073160    5233 request.go:632] Waited for 179.837088ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.073311    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.073322    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.073333    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.073340    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.081092    5233 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1213 11:36:01.086508    5233 system_pods.go:59] 26 kube-system pods found
	I1213 11:36:01.086526    5233 system_pods.go:61] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:36:01.086530    5233 system_pods.go:61] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:36:01.086533    5233 system_pods.go:61] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:36:01.086543    5233 system_pods.go:61] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:36:01.086547    5233 system_pods.go:61] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:36:01.086550    5233 system_pods.go:61] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:36:01.086553    5233 system_pods.go:61] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:36:01.086555    5233 system_pods.go:61] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:36:01.086559    5233 system_pods.go:61] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:36:01.086565    5233 system_pods.go:61] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:36:01.086569    5233 system_pods.go:61] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:36:01.086572    5233 system_pods.go:61] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:36:01.086575    5233 system_pods.go:61] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:36:01.086579    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:36:01.086582    5233 system_pods.go:61] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:36:01.086585    5233 system_pods.go:61] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:36:01.086588    5233 system_pods.go:61] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:36:01.086591    5233 system_pods.go:61] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:36:01.086593    5233 system_pods.go:61] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:36:01.086596    5233 system_pods.go:61] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:36:01.086600    5233 system_pods.go:61] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:36:01.086602    5233 system_pods.go:61] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:36:01.086606    5233 system_pods.go:61] "kube-vip-ha-224000" [6ca3e782-dd8d-4dd1-a888-c9a3c0b605a3] Running
	I1213 11:36:01.086609    5233 system_pods.go:61] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:36:01.086612    5233 system_pods.go:61] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:36:01.086616    5233 system_pods.go:61] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:36:01.086622    5233 system_pods.go:74] duration metric: took 193.351906ms to wait for pod list to return data ...
	I1213 11:36:01.086629    5233 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:36:01.272667    5233 request.go:632] Waited for 185.987795ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:36:01.272763    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/default/serviceaccounts
	I1213 11:36:01.272774    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.272785    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.272793    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.276315    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:01.276400    5233 default_sa.go:45] found service account: "default"
	I1213 11:36:01.276412    5233 default_sa.go:55] duration metric: took 189.780655ms for default service account to be created ...
	I1213 11:36:01.276419    5233 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:36:01.473526    5233 request.go:632] Waited for 197.034094ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.473601    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/namespaces/kube-system/pods
	I1213 11:36:01.473653    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.473672    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.473680    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.479025    5233 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1213 11:36:01.484476    5233 system_pods.go:86] 26 kube-system pods found
	I1213 11:36:01.484489    5233 system_pods.go:89] "coredns-7c65d6cfc9-5ds6r" [c9fef76c-5d01-46c3-8582-9b8f6d1db959] Running
	I1213 11:36:01.484495    5233 system_pods.go:89] "coredns-7c65d6cfc9-sswfx" [cc3f6cf5-bd73-4549-9d3f-21a70cd4e343] Running
	I1213 11:36:01.484499    5233 system_pods.go:89] "etcd-ha-224000" [e37cb943-f2ad-4534-95e1-b58fb75bd290] Running
	I1213 11:36:01.484502    5233 system_pods.go:89] "etcd-ha-224000-m02" [21a29657-2b28-425e-a5a0-2eec80e86c85] Running
	I1213 11:36:01.484506    5233 system_pods.go:89] "etcd-ha-224000-m03" [0258e957-302a-4b3d-ab37-fd7389104ba1] Running
	I1213 11:36:01.484508    5233 system_pods.go:89] "kindnet-687js" [11bb9217-ee8e-4c36-b3e1-df6ae829b17f] Running
	I1213 11:36:01.484511    5233 system_pods.go:89] "kindnet-c6kgd" [a71acedc-2646-4168-8001-1eb70fef09f9] Running
	I1213 11:36:01.484516    5233 system_pods.go:89] "kindnet-g6ss2" [57ab1c4e-f12d-4535-9778-02a254a8e91e] Running
	I1213 11:36:01.484518    5233 system_pods.go:89] "kindnet-kpjh5" [d5770b31-991f-43c2-82a4-f0051e25f645] Running
	I1213 11:36:01.484522    5233 system_pods.go:89] "kube-apiserver-ha-224000" [0711cf87-e62e-4df4-b57b-3752a85cb784] Running
	I1213 11:36:01.484524    5233 system_pods.go:89] "kube-apiserver-ha-224000-m02" [e59f5108-8b50-4eeb-b59b-dc037126303f] Running
	I1213 11:36:01.484527    5233 system_pods.go:89] "kube-apiserver-ha-224000-m03" [5f8c4c36-0655-42bc-9999-ef97d8143712] Running
	I1213 11:36:01.484531    5233 system_pods.go:89] "kube-controller-manager-ha-224000" [f2737c1e-2346-472c-9d2f-cb809744e251] Running
	I1213 11:36:01.484534    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m02" [535b5eae-b24a-49ae-b10c-0bd7dc79ae7d] Running
	I1213 11:36:01.484538    5233 system_pods.go:89] "kube-controller-manager-ha-224000-m03" [dcd61cf0-0a1b-48bd-a6ee-3afe1c057e72] Running
	I1213 11:36:01.484540    5233 system_pods.go:89] "kube-proxy-7b8ch" [62659dc9-7517-4cfe-bbf1-5f327752ccbc] Running
	I1213 11:36:01.484543    5233 system_pods.go:89] "kube-proxy-9wj7k" [6164bffc-eff9-49b2-8319-9bfba4e43312] Running
	I1213 11:36:01.484546    5233 system_pods.go:89] "kube-proxy-9wsr4" [fa0a1916-afa5-412f-a059-8dc19c68a7a7] Running
	I1213 11:36:01.484549    5233 system_pods.go:89] "kube-proxy-gmw9z" [4b9ed970-5ad3-4b15-a714-24f0f06632c8] Running
	I1213 11:36:01.484552    5233 system_pods.go:89] "kube-scheduler-ha-224000" [49425ce1-ac48-4015-af6a-7f83188a6c8d] Running
	I1213 11:36:01.484555    5233 system_pods.go:89] "kube-scheduler-ha-224000-m02" [f863de2b-b01e-4288-a9bd-b914a500a7ba] Running
	I1213 11:36:01.484558    5233 system_pods.go:89] "kube-scheduler-ha-224000-m03" [edb13f66-4f29-4d80-9a5d-f91d4f2c1f43] Running
	I1213 11:36:01.484561    5233 system_pods.go:89] "kube-vip-ha-224000" [6ca3e782-dd8d-4dd1-a888-c9a3c0b605a3] Running
	I1213 11:36:01.484563    5233 system_pods.go:89] "kube-vip-ha-224000-m02" [c6ad328e-6073-479a-a61e-8d92f3937cac] Running
	I1213 11:36:01.484567    5233 system_pods.go:89] "kube-vip-ha-224000-m03" [f2d96bf8-ab2d-48e8-a760-029ae1e9aabb] Running
	I1213 11:36:01.484571    5233 system_pods.go:89] "storage-provisioner" [b3bd2963-cd6d-462d-9162-3ac606e91850] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:36:01.484576    5233 system_pods.go:126] duration metric: took 208.153776ms to wait for k8s-apps to be running ...
	I1213 11:36:01.484587    5233 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:36:01.484655    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:36:01.495689    5233 system_svc.go:56] duration metric: took 11.101939ms WaitForService to wait for kubelet
	I1213 11:36:01.495712    5233 kubeadm.go:582] duration metric: took 26.116392116s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:36:01.495725    5233 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:36:01.673624    5233 request.go:632] Waited for 177.853394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.6:8443/api/v1/nodes
	I1213 11:36:01.673726    5233 round_trippers.go:463] GET https://192.169.0.6:8443/api/v1/nodes
	I1213 11:36:01.673737    5233 round_trippers.go:469] Request Headers:
	I1213 11:36:01.673747    5233 round_trippers.go:473]     Accept: application/json, */*
	I1213 11:36:01.673785    5233 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1213 11:36:01.677584    5233 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1213 11:36:01.678344    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678354    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678360    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678364    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678367    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678369    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678372    5233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 11:36:01.678375    5233 node_conditions.go:123] node cpu capacity is 2
	I1213 11:36:01.678378    5233 node_conditions.go:105] duration metric: took 182.650917ms to run NodePressure ...
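
[Note] The NodePressure check reads each node's capacity from the Node objects; the log above shows the two figures it extracts per node. A sketch of that read, assuming an already-constructed client-go clientset (the package and function names are hypothetical):

    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // PrintNodeCapacity lists all nodes and reports the same two capacity
    // figures the log shows: ephemeral-storage and cpu.
    func PrintNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }
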
	I1213 11:36:01.678389    5233 start.go:241] waiting for startup goroutines ...
	I1213 11:36:01.678404    5233 start.go:255] writing updated cluster config ...
	I1213 11:36:01.701519    5233 out.go:201] 
	I1213 11:36:01.755040    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:36:01.755118    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.792739    5233 out.go:177] * Starting "ha-224000-m04" worker node in "ha-224000" cluster
	I1213 11:36:01.850695    5233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:36:01.850719    5233 cache.go:56] Caching tarball of preloaded images
	I1213 11:36:01.850830    5233 preload.go:172] Found /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 11:36:01.850840    5233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1213 11:36:01.850919    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.851367    5233 start.go:360] acquireMachinesLock for ha-224000-m04: {Name:mkd8725f0f3fb228f1db0d65c3b846c1694ab04b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 11:36:01.851417    5233 start.go:364] duration metric: took 38.664µs to acquireMachinesLock for "ha-224000-m04"
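
[Note] acquireMachinesLock serializes access to a machine's state while it is being created or fixed, retrying with the Delay until the Timeout shown in the lock config. A minimal flock-based sketch of the same idea (minikube itself uses a named mutex, not this exact code; names here are illustrative):

    package machinelock

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // LockMachine takes an exclusive advisory lock on a per-machine lock file,
    // retrying with a fixed delay until the timeout elapses. Closing the
    // returned file releases the lock.
    func LockMachine(path string, delay, timeout time.Duration) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return nil, err
        }
        deadline := time.Now().Add(timeout)
        for {
            err = syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
            if err == nil {
                return f, nil
            }
            if time.Now().After(deadline) {
                f.Close()
                return nil, fmt.Errorf("timed out waiting for lock %s: %w", path, err)
            }
            time.Sleep(delay)
        }
    }
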
	I1213 11:36:01.851430    5233 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:36:01.851435    5233 fix.go:54] fixHost starting: m04
	I1213 11:36:01.851670    5233 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:36:01.851689    5233 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:36:01.863548    5233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51897
	I1213 11:36:01.863864    5233 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:36:01.864237    5233 main.go:141] libmachine: Using API Version  1
	I1213 11:36:01.864251    5233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:36:01.864489    5233 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:36:01.864595    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:01.864718    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetState
	I1213 11:36:01.864801    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.864873    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid from json: 4360
	I1213 11:36:01.866047    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid 4360 missing from process table
	I1213 11:36:01.866070    5233 fix.go:112] recreateIfNeeded on ha-224000-m04: state=Stopped err=<nil>
	I1213 11:36:01.866083    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	W1213 11:36:01.866170    5233 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:36:01.886701    5233 out.go:177] * Restarting existing hyperkit VM for "ha-224000-m04" ...
	I1213 11:36:01.927945    5233 main.go:141] libmachine: (ha-224000-m04) Calling .Start
	I1213 11:36:01.928215    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.928249    5233 main.go:141] libmachine: (ha-224000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid
	I1213 11:36:01.928315    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Using UUID 3aa2edb2-289d-46e2-9534-1f9a2dff1012
	I1213 11:36:01.954122    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Generated MAC e2:d2:09:69:a8:b4
	I1213 11:36:01.954144    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000
	I1213 11:36:01.954348    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3aa2edb2-289d-46e2-9534-1f9a2dff1012", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f0e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:36:01.954378    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3aa2edb2-289d-46e2-9534-1f9a2dff1012", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f0e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1213 11:36:01.954426    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3aa2edb2-289d-46e2-9534-1f9a2dff1012", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/ha-224000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"}
	I1213 11:36:01.954465    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3aa2edb2-289d-46e2-9534-1f9a2dff1012 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/ha-224000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/tty,log=/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/bzimage,/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-224000"
	I1213 11:36:01.954478    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1213 11:36:01.956069    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 DEBUG: hyperkit: Pid is 5375
	I1213 11:36:01.956512    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Attempt 0
	I1213 11:36:01.956527    5233 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:36:01.956630    5233 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid from json: 5375
	I1213 11:36:01.959334    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Searching for e2:d2:09:69:a8:b4 in /var/db/dhcpd_leases ...
	I1213 11:36:01.959473    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1213 11:36:01.959490    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a6:90:90:dd:31:4c ID:1,a6:90:90:dd:31:4c Lease:0x675c9a76}
	I1213 11:36:01.959506    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:fa:54:eb:53:13:e6 ID:1,fa:54:eb:53:13:e6 Lease:0x675c9a30}
	I1213 11:36:01.959522    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e2:1f:26:f2:db:4d ID:1,e2:1f:26:f2:db:4d Lease:0x675c9a1d}
	I1213 11:36:01.959533    5233 main.go:141] libmachine: (ha-224000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:d2:09:69:a8:b4 ID:1,e2:d2:9:69:a8:b4 Lease:0x675c8be9}
	I1213 11:36:01.959548    5233 main.go:141] libmachine: (ha-224000-m04) DBG | Found match: e2:d2:09:69:a8:b4
	I1213 11:36:01.959568    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetConfigRaw
	I1213 11:36:01.959573    5233 main.go:141] libmachine: (ha-224000-m04) DBG | IP: 192.169.0.9
	I1213 11:36:01.960365    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:01.960553    5233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/ha-224000/config.json ...
	I1213 11:36:01.960997    5233 machine.go:93] provisionDockerMachine start ...
	I1213 11:36:01.961019    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:01.961190    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:01.961347    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:01.961451    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:01.961542    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:01.961646    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:01.961799    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:01.961972    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:01.961979    5233 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 11:36:01.968096    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1213 11:36:01.976979    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1213 11:36:01.978042    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:36:01.978064    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:36:01.978076    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:36:01.978087    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:36:02.370264    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1213 11:36:02.370282    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1213 11:36:02.485027    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1213 11:36:02.485059    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1213 11:36:02.485069    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1213 11:36:02.485077    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1213 11:36:02.485882    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1213 11:36:02.485893    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1213 11:36:08.339296    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1213 11:36:08.339331    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1213 11:36:08.339343    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1213 11:36:08.362659    5233 main.go:141] libmachine: (ha-224000-m04) DBG | 2024/12/13 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1213 11:36:37.019941    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 11:36:37.019956    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.020079    5233 buildroot.go:166] provisioning hostname "ha-224000-m04"
	I1213 11:36:37.020091    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.020181    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.020268    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.020362    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.020446    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.020550    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.020691    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.020850    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.020859    5233 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-224000-m04 && echo "ha-224000-m04" | sudo tee /etc/hostname
	I1213 11:36:37.079455    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-224000-m04
	
	I1213 11:36:37.079470    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.079611    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.079712    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.079807    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.079899    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.080050    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.080202    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.080213    5233 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-224000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-224000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-224000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:36:37.138441    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 
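
[Note] The shell snippet above is rendered host-side and run over SSH: if no /etc/hosts line already ends with the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. A sketch of composing that command, where hostnamePatchCmd is a hypothetical helper name:

    package provision

    import "fmt"

    // hostnamePatchCmd renders the /etc/hosts fix-up shown in the log for an
    // arbitrary node name; %[1]s is replaced with the hostname everywhere.
    func hostnamePatchCmd(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname)
    }
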
	I1213 11:36:37.138458    5233 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20090-800/.minikube CaCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20090-800/.minikube}
	I1213 11:36:37.138471    5233 buildroot.go:174] setting up certificates
	I1213 11:36:37.138478    5233 provision.go:84] configureAuth start
	I1213 11:36:37.138489    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetMachineName
	I1213 11:36:37.138635    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:37.138758    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.138874    5233 provision.go:143] copyHostCerts
	I1213 11:36:37.138906    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:36:37.138980    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem, removing ...
	I1213 11:36:37.138987    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem
	I1213 11:36:37.139126    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/ca.pem (1078 bytes)
	I1213 11:36:37.139340    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:36:37.139389    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem, removing ...
	I1213 11:36:37.139394    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem
	I1213 11:36:37.139490    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/cert.pem (1123 bytes)
	I1213 11:36:37.139651    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:36:37.139700    5233 exec_runner.go:144] found /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem, removing ...
	I1213 11:36:37.139705    5233 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem
	I1213 11:36:37.139785    5233 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20090-800/.minikube/key.pem (1675 bytes)
	I1213 11:36:37.139956    5233 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca-key.pem org=jenkins.ha-224000-m04 san=[127.0.0.1 192.169.0.9 ha-224000-m04 localhost minikube]
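
[Note] provision.go:117 issues a server certificate signed by the local CA, with the SAN list covering the loopback address, the VM IP, and the hostnames shown above. A condensed sketch of that signing step with crypto/x509; caCert and caKey are assumed to be loaded from the ca.pem/ca-key.pem files elsewhere, and this is not minikube's actual cert helper:

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // SignServerCert issues a server certificate for the given SANs, signed by
    // caCert/caKey, and returns the DER bytes plus the new private key.
    func SignServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
        org string, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}}, // e.g. jenkins.ha-224000-m04
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames, // e.g. ha-224000-m04, localhost, minikube
            IPAddresses:  ips,      // e.g. 127.0.0.1, 192.169.0.9
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }
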
	I1213 11:36:37.316710    5233 provision.go:177] copyRemoteCerts
	I1213 11:36:37.316783    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:36:37.316812    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.316958    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.317051    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.317152    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.317246    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:37.347920    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:36:37.347992    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:36:37.367331    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:36:37.367418    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:36:37.387377    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:36:37.387449    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 11:36:37.407116    5233 provision.go:87] duration metric: took 268.631983ms to configureAuth
	I1213 11:36:37.407131    5233 buildroot.go:189] setting minikube options for container-runtime
	I1213 11:36:37.407332    5233 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:36:37.407364    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:37.407494    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.407580    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.407680    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.407756    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.407841    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.407978    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.408110    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.408119    5233 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 11:36:37.455460    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1213 11:36:37.455475    5233 buildroot.go:70] root file system type: tmpfs
	I1213 11:36:37.455568    5233 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 11:36:37.455579    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.455716    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.455822    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.455928    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.456017    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.456183    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.456322    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.456371    5233 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.6"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7"
	Environment="NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 11:36:37.514210    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.6
	Environment=NO_PROXY=192.169.0.6,192.169.0.7
	Environment=NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 11:36:37.514229    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:37.514369    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:37.514460    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.514608    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:37.514700    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:37.514873    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:37.515015    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:37.515027    5233 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 11:36:39.106697    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1213 11:36:39.106713    5233 machine.go:96] duration metric: took 37.146099544s to provisionDockerMachine
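
[Note] The command at the end of provisioning is the idempotent unit-install pattern: diff the freshly rendered unit against what is on disk, and only on a difference (or, as here, when the file does not exist yet) move the new file into place and daemon-reload, enable, and restart the service. A sketch of composing that one-liner, with unitInstallCmd as a hypothetical name:

    package provision

    import "fmt"

    // unitInstallCmd returns the diff-or-replace one-liner from the log: a
    // no-op when the rendered unit matches the installed one, otherwise a swap
    // followed by daemon-reload, enable, and restart. systemctl accepts the
    // full unit file name (e.g. "docker.service") for enable/restart.
    func unitInstallCmd(unit string) string {
        path := "/lib/systemd/system/" + unit
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
            path, unit)
    }
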
	I1213 11:36:39.106722    5233 start.go:293] postStartSetup for "ha-224000-m04" (driver="hyperkit")
	I1213 11:36:39.106729    5233 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:36:39.106741    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.106958    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:36:39.106972    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.107076    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.107171    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.107250    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.107377    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.137664    5233 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:36:39.140876    5233 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 11:36:39.140886    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/addons for local assets ...
	I1213 11:36:39.140989    5233 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20090-800/.minikube/files for local assets ...
	I1213 11:36:39.141205    5233 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> 17962.pem in /etc/ssl/certs
	I1213 11:36:39.141216    5233 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem -> /etc/ssl/certs/17962.pem
	I1213 11:36:39.141482    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:36:39.148686    5233 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/ssl/certs/17962.pem --> /etc/ssl/certs/17962.pem (1708 bytes)
	I1213 11:36:39.168356    5233 start.go:296] duration metric: took 61.625015ms for postStartSetup
	I1213 11:36:39.168377    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.168566    5233 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1213 11:36:39.168580    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.168694    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.168784    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.168873    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.168955    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.200288    5233 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1213 11:36:39.200368    5233 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1213 11:36:39.252642    5233 fix.go:56] duration metric: took 37.401602513s for fixHost
	I1213 11:36:39.252667    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.252828    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.252931    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.253035    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.253138    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.253294    5233 main.go:141] libmachine: Using SSH client type: native
	I1213 11:36:39.253427    5233 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x634c360] 0x634f040 <nil>  [] 0s} 192.169.0.9 22 <nil> <nil>}
	I1213 11:36:39.253435    5233 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 11:36:39.303241    5233 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734118599.429050956
	
	I1213 11:36:39.303262    5233 fix.go:216] guest clock: 1734118599.429050956
	I1213 11:36:39.303272    5233 fix.go:229] Guest: 2024-12-13 11:36:39.429050956 -0800 PST Remote: 2024-12-13 11:36:39.252657 -0800 PST m=+195.719809020 (delta=176.393956ms)
	I1213 11:36:39.303284    5233 fix.go:200] guest clock delta is within tolerance: 176.393956ms
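	A hedged Go sketch of the guest-clock check described by the fix.go lines above
	(illustrative only, not minikube's actual code; the function name and the
	two-second tolerance are assumptions):
	
	// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
	// the guest clock is ahead of the host clock. Parsing via float64 loses
	// sub-microsecond precision, which is acceptable for a tolerance check in the
	// hundreds of milliseconds.
	package main
	
	import (
		"fmt"
		"strconv"
		"time"
	)
	
	func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}
	
	func main() {
		// Sample values taken from the log lines above.
		host := time.Date(2024, 12, 13, 11, 36, 39, 252657000, time.FixedZone("PST", -8*3600))
		delta, err := guestClockDelta("1734118599.429050956", host)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed tolerance, for illustration only
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
	}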
	I1213 11:36:39.303287    5233 start.go:83] releasing machines lock for "ha-224000-m04", held for 37.452264193s
	I1213 11:36:39.303304    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.303439    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:36:39.324718    5233 out.go:177] * Found network options:
	I1213 11:36:39.345593    5233 out.go:177]   - NO_PROXY=192.169.0.6,192.169.0.7,192.169.0.8
	W1213 11:36:39.367406    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.367428    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.367438    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:36:39.367453    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.367872    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.367964    5233 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:36:39.368045    5233 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:36:39.368067    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	W1213 11:36:39.368071    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.368083    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	W1213 11:36:39.368091    5233 proxy.go:119] fail to check proxy env: Error ip not in block
	I1213 11:36:39.368153    5233 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 11:36:39.368162    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.368165    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:36:39.368280    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.368311    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:36:39.368396    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:36:39.368417    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.368502    5233 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:36:39.368516    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:36:39.368581    5233 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	W1213 11:36:39.395349    5233 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:36:39.395429    5233 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:36:39.444914    5233 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
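	The disable step above can be expressed as a minimal Go sketch (illustrative
	only, not minikube's implementation; the function name is made up here): like
	the `find ... -exec mv` command, it renames any bridge/podman CNI config in
	/etc/cni/net.d by appending a ".mk_disabled" suffix so the runtime ignores it.
	
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)
	
	// disableBridgeCNIConfigs renames bridge/podman CNI configs so the container
	// runtime no longer loads them, returning the paths it disabled.
	func disableBridgeCNIConfigs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}
	
	func main() {
		disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("disabled:", disabled)
	}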
	I1213 11:36:39.444929    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:36:39.445000    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:36:39.460519    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1213 11:36:39.468747    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:36:39.476970    5233 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:36:39.477028    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:36:39.485250    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:36:39.493728    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:36:39.501920    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:36:39.510067    5233 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:36:39.518621    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:36:39.527064    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:36:39.535503    5233 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:36:39.544105    5233 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:36:39.551996    5233 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 11:36:39.552057    5233 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 11:36:39.560903    5233 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
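	The three steps above (probe the bridge-netfilter sysctl, load br_netfilter
	when the key is missing, then enable IPv4 forwarding) amount to the following
	Go sketch; an assumption-laden illustration requiring root, not minikube's code:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// ensureNetfilter mirrors the log sequence: if the bridge-nf sysctl key is
	// absent (module not loaded), load br_netfilter, then enable IPv4 forwarding.
	func ensureNetfilter() error {
		const bridgeKey = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(bridgeKey); err != nil {
			if out, merr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); merr != nil {
				return fmt.Errorf("modprobe br_netfilter: %v: %s", merr, out)
			}
		}
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
	}
	
	func main() {
		if err := ensureNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}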
	I1213 11:36:39.569057    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:36:39.663026    5233 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:36:39.681615    5233 start.go:495] detecting cgroup driver to use...
	I1213 11:36:39.681707    5233 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 11:36:39.701692    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:36:39.713515    5233 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:36:39.733157    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:36:39.744420    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:36:39.755241    5233 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:36:39.778169    5233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:36:39.788619    5233 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:36:39.803742    5233 ssh_runner.go:195] Run: which cri-dockerd
	I1213 11:36:39.806753    5233 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 11:36:39.814222    5233 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1213 11:36:39.828173    5233 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 11:36:39.923220    5233 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 11:36:40.025879    5233 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 11:36:40.025908    5233 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 11:36:40.040057    5233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:36:40.139577    5233 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 11:37:41.169349    5233 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.030424073s)
	I1213 11:37:41.169444    5233 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1213 11:37:41.204399    5233 out.go:201] 
	W1213 11:37:41.225442    5233 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Dec 13 19:36:37 ha-224000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.427068027Z" level=info msg="Starting up"
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.427760840Z" level=info msg="containerd not running, starting managed containerd"
	Dec 13 19:36:37 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:37.428340753Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.446225003Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461418150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461538159Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461607016Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461644040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461775643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461826393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.461966604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462007624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462040126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462069720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462182838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.462429601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464011795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464067757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464257837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464302280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464410649Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.464463860Z" level=info msg="metadata content store policy set" policy=shared
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465390367Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465443699Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465555213Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465597957Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465634744Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465705067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.465941498Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466071120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466113283Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466145023Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466176156Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466211240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466250495Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466285590Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466317193Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466347259Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466376937Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466407325Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466446395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466488362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466530329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466566314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466607503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466641823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466672212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466702609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466732812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466764575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466794248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466823748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466854140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466886668Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466935305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.466981167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467011716Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467066705Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467101883Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467131499Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467160087Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467188157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467216598Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467244211Z" level=info msg="NRI interface is disabled by configuration."
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467402488Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467606858Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467674178Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 13 19:36:37 ha-224000-m04 dockerd[514]: time="2024-12-13T19:36:37.467711081Z" level=info msg="containerd successfully booted in 0.022287s"
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.455600290Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.476104344Z" level=info msg="Loading containers: start."
	Dec 13 19:36:38 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:38.568941234Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.144331314Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.199597389Z" level=info msg="Loading containers: done."
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210939061Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210976128Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.210994749Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.211089971Z" level=info msg="Daemon has completed initialization"
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.231136019Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 19:36:39 ha-224000-m04 systemd[1]: Started Docker Application Container Engine.
	Dec 13 19:36:39 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:39.231344731Z" level=info msg="API listen on [::]:2376"
	Dec 13 19:36:40 ha-224000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.277223387Z" level=info msg="Processing signal 'terminated'"
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278137307Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278251358Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278340377Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 13 19:36:40 ha-224000-m04 dockerd[508]: time="2024-12-13T19:36:40.278256739Z" level=info msg="Daemon shutdown complete"
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: docker.service: Deactivated successfully.
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Dec 13 19:36:41 ha-224000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 13 19:36:41 ha-224000-m04 dockerd[1113]: time="2024-12-13T19:36:41.322763293Z" level=info msg="Starting up"
	Dec 13 19:37:41 ha-224000-m04 dockerd[1113]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 13 19:37:41 ha-224000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1213 11:37:41.225503    5233 out.go:270] * 
	W1213 11:37:41.226123    5233 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:37:41.267588    5233 out.go:201] 
	
	
	==> Docker <==
	Dec 13 19:35:17 ha-224000 dockerd[1176]: time="2024-12-13T19:35:17.296092113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233837137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233911634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233925821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.233995450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239334702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239439690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239450304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:27 ha-224000 dockerd[1176]: time="2024-12-13T19:35:27.239575939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.205775306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.207076446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.207155526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:29 ha-224000 dockerd[1176]: time="2024-12-13T19:35:29.207356928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206616412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206773456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206817690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:30 ha-224000 dockerd[1176]: time="2024-12-13T19:35:30.206899370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:35:57 ha-224000 dockerd[1176]: time="2024-12-13T19:35:57.457128150Z" level=info msg="shim disconnected" id=813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3 namespace=moby
	Dec 13 19:35:57 ha-224000 dockerd[1170]: time="2024-12-13T19:35:57.457607034Z" level=info msg="ignoring event" container=813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 13 19:35:57 ha-224000 dockerd[1176]: time="2024-12-13T19:35:57.457838474Z" level=warning msg="cleaning up after shim disconnected" id=813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3 namespace=moby
	Dec 13 19:35:57 ha-224000 dockerd[1176]: time="2024-12-13T19:35:57.457953841Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213145624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213212633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213225596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 13 19:36:42 ha-224000 dockerd[1176]: time="2024-12-13T19:36:42.213337090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b961eac98708b       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   93cd09024c535       storage-provisioner
	f1b285481948b       50415e5d05f05                                                                                         2 minutes ago        Running             kindnet-cni               1                   06f29a39c508a       kindnet-687js
	38ee6f8374b04       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   6ed2d05ea2409       busybox-7dff88458-wbknx
	5f565c400b733       505d571f5fd56                                                                                         2 minutes ago        Running             kube-proxy                1                   31cf2effc73d7       kube-proxy-9wj7k
	5050cecf942e2       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   645aca2ea936b       coredns-7c65d6cfc9-5ds6r
	df8ddf72aa14f       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   8cef794a507b6       coredns-7c65d6cfc9-sswfx
	dba699a298586       0486b6c53a1b5                                                                                         3 minutes ago        Running             kube-controller-manager   2                   da5d4e126c370       kube-controller-manager-ha-224000
	2c7e84811a057       9499c9960544e                                                                                         3 minutes ago        Running             kube-apiserver            2                   6651a1d0a89d4       kube-apiserver-ha-224000
	d34c8e7a98686       f1c87c24be687                                                                                         4 minutes ago        Running             kube-vip                  0                   53478f9b98c3e       kube-vip-ha-224000
	0457a6eb9fce4       9499c9960544e                                                                                         4 minutes ago        Exited              kube-apiserver            1                   6651a1d0a89d4       kube-apiserver-ha-224000
	78030050b83d7       2e96e5913fc06                                                                                         4 minutes ago        Running             etcd                      1                   48f05aec7d5f4       etcd-ha-224000
	8cce3a8cb1260       847c7bc1a5418                                                                                         4 minutes ago        Running             kube-scheduler            1                   d605ad9f8c9f5       kube-scheduler-ha-224000
	dda62d21c5c2f       0486b6c53a1b5                                                                                         4 minutes ago        Exited              kube-controller-manager   1                   da5d4e126c370       kube-controller-manager-ha-224000
	89334114a6e1e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   8 minutes ago        Exited              busybox                   0                   ddc328d7180f5       busybox-7dff88458-wbknx
	cf4b333fe5f49       c69fa2e9cbf5f                                                                                         11 minutes ago       Exited              coredns                   0                   f18799b2271c7       coredns-7c65d6cfc9-sswfx
	f16805d6df5d4       c69fa2e9cbf5f                                                                                         11 minutes ago       Exited              coredns                   0                   653774da684e6       coredns-7c65d6cfc9-5ds6r
	532326a9b719a       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              11 minutes ago       Exited              kindnet-cni               0                   989ccdb8aa000       kindnet-687js
	94480a2dd9b5e       505d571f5fd56                                                                                         11 minutes ago       Exited              kube-proxy                0                   1cd5ef5ffe1e4       kube-proxy-9wj7k
	ad0dc00c3676d       2e96e5913fc06                                                                                         11 minutes ago       Exited              etcd                      0                   6121511eb160b       etcd-ha-224000
	63c39e011231f       847c7bc1a5418                                                                                         11 minutes ago       Exited              kube-scheduler            0                   2046a92fb05bb       kube-scheduler-ha-224000
	
	
	==> coredns [5050cecf942e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:39218 - 50752 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" - - 0 6.001691935s
	[ERROR] plugin/errors: 2 2774560578117609647.1532570917481937419. HINFO: read udp 10.244.0.4:35345->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:41938 - 7905 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" - - 0 6.001636827s
	[ERROR] plugin/errors: 2 2774560578117609647.1532570917481937419. HINFO: read udp 10.244.0.4:38380->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:41437 - 45110 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.001832207s
	[INFO] 127.0.0.1:44515 - 54662 "HINFO IN 2774560578117609647.1532570917481937419. udp 57 false 512" - - 0 4.002458371s
	[ERROR] plugin/errors: 2 2774560578117609647.1532570917481937419. HINFO: read udp 10.244.0.4:41265->192.169.0.1:53: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[446765318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30005ms):
	Trace[446765318]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (19:35:47.544)
	Trace[446765318]: [30.005577524s] [30.005577524s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[393764073]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30006ms):
	Trace[393764073]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (19:35:47.544)
	Trace[393764073]: [30.006232941s] [30.006232941s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[531717446]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.543) (total time: 30002ms):
	Trace[531717446]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:35:47.544)
	Trace[531717446]: [30.002274294s] [30.002274294s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [cf4b333fe5f4] <==
	[INFO] 10.244.2.2:52684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320449s
	[INFO] 10.244.2.2:56489 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.010940453s
	[INFO] 10.244.2.2:53656 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010500029s
	[INFO] 10.244.1.2:40275 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235614s
	[INFO] 10.244.0.4:54501 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000070742s
	[INFO] 10.244.2.2:54661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099137s
	[INFO] 10.244.2.2:53526 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010894436s
	[INFO] 10.244.2.2:43837 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093129s
	[INFO] 10.244.2.2:48144 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01305588s
	[INFO] 10.244.2.2:37929 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083719s
	[INFO] 10.244.2.2:56915 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109123s
	[INFO] 10.244.2.2:54881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064664s
	[INFO] 10.244.1.2:36673 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000091432s
	[INFO] 10.244.1.2:34220 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009472s
	[INFO] 10.244.1.2:38397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007902s
	[INFO] 10.244.0.4:44003 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000090711s
	[INFO] 10.244.0.4:37919 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060032s
	[INFO] 10.244.0.4:57710 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104441s
	[INFO] 10.244.2.2:36812 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000142147s
	[INFO] 10.244.1.2:43077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013892s
	[INFO] 10.244.0.4:44480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107424s
	[INFO] 10.244.0.4:50392 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00013146s
	[INFO] 10.244.0.4:57954 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090837s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [df8ddf72aa14] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:35560 - 57542 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" - - 0 6.003265442s
	[ERROR] plugin/errors: 2 7691483522066365998.6584771563269026758. HINFO: read udp 10.244.0.3:57849->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:36876 - 8169 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" - - 0 2.001203837s
	[ERROR] plugin/errors: 2 7691483522066365998.6584771563269026758. HINFO: read udp 10.244.0.3:33115->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:55518 - 55981 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" - - 0 6.003381935s
	[ERROR] plugin/errors: 2 7691483522066365998.6584771563269026758. HINFO: read udp 10.244.0.3:35637->192.169.0.1:53: i/o timeout
	[INFO] 127.0.0.1:51113 - 20297 "HINFO IN 7691483522066365998.6584771563269026758. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.000906393s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[469351415]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30002ms):
	Trace[469351415]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:35:47.541)
	Trace[469351415]: [30.002900538s] [30.002900538s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[235804559]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.539) (total time: 30004ms):
	Trace[235804559]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:35:47.543)
	Trace[235804559]: [30.004014569s] [30.004014569s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[222840766]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Dec-2024 19:35:17.542) (total time: 30002ms):
	Trace[222840766]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:35:47.544)
	Trace[222840766]: [30.002499147s] [30.002499147s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [f16805d6df5d] <==
	[INFO] 10.244.0.4:50423 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000616257s
	[INFO] 10.244.0.4:51571 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066308s
	[INFO] 10.244.0.4:55425 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000034221s
	[INFO] 10.244.0.4:33674 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091937s
	[INFO] 10.244.0.4:60931 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037068s
	[INFO] 10.244.2.2:51638 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103452s
	[INFO] 10.244.2.2:33033 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088733s
	[INFO] 10.244.2.2:51032 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145099s
	[INFO] 10.244.2.2:58035 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067066s
	[INFO] 10.244.1.2:35671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137338s
	[INFO] 10.244.1.2:43244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083679s
	[INFO] 10.244.1.2:49096 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008999s
	[INFO] 10.244.1.2:50254 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108638s
	[INFO] 10.244.0.4:50170 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091228s
	[INFO] 10.244.0.4:60410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158647s
	[INFO] 10.244.0.4:51342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086722s
	[INFO] 10.244.2.2:37837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076855s
	[INFO] 10.244.2.2:53946 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100477s
	[INFO] 10.244.2.2:48539 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00006865s
	[INFO] 10.244.1.2:34571 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102259s
	[INFO] 10.244.1.2:48156 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010558s
	[INFO] 10.244.1.2:56382 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000094051s
	[INFO] 10.244.0.4:56589 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000045096s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-224000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-224000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=ha-224000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_13T11_26_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:26:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-224000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:37:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:34:38 +0000   Fri, 13 Dec 2024 19:26:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-224000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c482b8662654c3a869b1ecefe5cf9ee
	  System UUID:                b2cf45fe-0000-0000-a947-282a845e5503
	  Boot ID:                    a3b32e80-0a2c-43a6-967b-82a2f6e8eef5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wbknx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	  kube-system                 coredns-7c65d6cfc9-5ds6r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-7c65d6cfc9-sswfx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-ha-224000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-687js                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-224000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-224000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9wj7k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-224000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-224000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 2m31s                  kube-proxy       
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node ha-224000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node ha-224000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node ha-224000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                    node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-224000 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           9m21s                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  NodeHasSufficientMemory  4m17s (x8 over 4m17s)  kubelet          Node ha-224000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m17s (x8 over 4m17s)  kubelet          Node ha-224000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x7 over 4m17s)  kubelet          Node ha-224000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m22s                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           3m22s                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	  Normal  RegisteredNode           2m16s                  node-controller  Node ha-224000 event: Registered Node ha-224000 in Controller
	
	
	Name:               ha-224000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-224000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=ha-224000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_13T11_27_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:27:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-224000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:37:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:34:33 +0000   Fri, 13 Dec 2024 19:27:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-224000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a69af53a722464e92c469155271604e
	  System UUID:                573e4bce-0000-0000-aba3-b379863bb495
	  Boot ID:                    ae7bc928-29f4-4c6b-bd14-f4e659fc8097
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l97s5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	  kube-system                 etcd-ha-224000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-c6kgd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-224000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-224000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9wsr4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-224000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-224000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m23s                  kube-proxy       
	  Normal   Starting                 5m20s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-224000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-224000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-224000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           9m21s                  node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   Starting                 5m25s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 5m24s                  kubelet          Node ha-224000-m02 has been rebooted, boot id: 77378fb8-5f4b-4218-9a14-15ce228529ff
	  Normal   NodeHasSufficientMemory  5m24s                  kubelet          Node ha-224000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m24s                  kubelet          Node ha-224000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m24s                  kubelet          Node ha-224000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m17s                  node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   Starting                 3m35s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m34s (x8 over 3m35s)  kubelet          Node ha-224000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m34s (x8 over 3m35s)  kubelet          Node ha-224000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m34s (x7 over 3m35s)  kubelet          Node ha-224000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m22s                  node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           3m22s                  node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	  Normal   RegisteredNode           2m16s                  node-controller  Node ha-224000-m02 event: Registered Node ha-224000-m02 in Controller
	
	
	Name:               ha-224000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-224000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=ha-224000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_13T11_31_24_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:31:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-224000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:32:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 13 Dec 2024 19:31:54 +0000   Fri, 13 Dec 2024 19:35:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-224000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e9882ffc62647968bea651d5ce1f097
	  System UUID:                3aa246e2-0000-0000-9534-1f9a2dff1012
	  Boot ID:                    0f3125e8-e3e0-4806-91cb-fd0eaa4f608f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-g6ss2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-proxy-7b8ch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    6m35s (x2 over 6m35s)  kubelet          Node ha-224000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     6m35s (x2 over 6m35s)  kubelet          Node ha-224000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m35s (x2 over 6m35s)  kubelet          Node ha-224000-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           6m34s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  NodeReady                6m12s                  kubelet          Node ha-224000-m04 status is now: NodeReady
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           3m22s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  RegisteredNode           3m22s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
	  Normal  NodeNotReady             2m42s                  node-controller  Node ha-224000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           2m16s                  node-controller  Node ha-224000-m04 event: Registered Node ha-224000-m04 in Controller
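
	Note the taints on ha-224000-m04: its lease last renewed at 19:32:56, all four conditions flipped to Unknown at 19:35:17 ("Kubelet stopped posting node status"), and the node controller applied node.kubernetes.io/unreachable (NoSchedule and NoExecute) and emitted NodeNotReady. The same state is visible at a glance with stock kubectl; a sketch:
	
	  $ kubectl get nodes
	  $ kubectl describe node ha-224000-m04 | grep -A2 Taints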
	
	
	==> dmesg <==
	[  +0.035991] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008030] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.835151] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007092] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.809793] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.216222] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.358309] systemd-fstab-generator[460]: Ignoring "noauto" option for root device
	[  +0.105099] systemd-fstab-generator[472]: Ignoring "noauto" option for root device
	[  +1.959406] systemd-fstab-generator[1100]: Ignoring "noauto" option for root device
	[  +0.254010] systemd-fstab-generator[1136]: Ignoring "noauto" option for root device
	[  +0.104125] systemd-fstab-generator[1148]: Ignoring "noauto" option for root device
	[  +0.104856] systemd-fstab-generator[1162]: Ignoring "noauto" option for root device
	[  +0.058611] kauditd_printk_skb: 149 callbacks suppressed
	[  +2.414891] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.103198] systemd-fstab-generator[1400]: Ignoring "noauto" option for root device
	[  +0.113797] systemd-fstab-generator[1412]: Ignoring "noauto" option for root device
	[  +0.119494] systemd-fstab-generator[1427]: Ignoring "noauto" option for root device
	[  +0.429719] systemd-fstab-generator[1587]: Ignoring "noauto" option for root device
	[  +6.882724] kauditd_printk_skb: 172 callbacks suppressed
	[Dec13 19:34] kauditd_printk_skb: 40 callbacks suppressed
	[Dec13 19:35] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.801033] kauditd_printk_skb: 38 callbacks suppressed
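
	This dmesg excerpt is from the Linux guest (kernel 5.10.207), not the macOS host; the ACPI, RETBleed, and NFSD warnings are routine noise for the Buildroot guest image. The buffer can be re-read over the driver's SSH channel; a sketch, assuming the ha-224000 profile is still running:
	
	  $ minikube ssh -p ha-224000 "dmesg | tail -n 25"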
	
	
	==> etcd [78030050b83d] <==
	{"level":"info","ts":"2024-12-13T19:35:36.914506Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:35:36.915577Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:35:36.968970Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"e397b3b47bd62ab9","to":"afd89b9ec393451","stream-type":"stream Message"}
	{"level":"info","ts":"2024-12-13T19:35:36.969147Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:35:36.970728Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"e397b3b47bd62ab9","to":"afd89b9ec393451","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-12-13T19:35:36.970799Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.630337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e397b3b47bd62ab9 switched to configuration voters=(7605335155526620764 16399774155846068921)"}
	{"level":"info","ts":"2024-12-13T19:37:49.631543Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"7182ce703fa4d8d4","local-member-id":"e397b3b47bd62ab9","removed-remote-peer-id":"afd89b9ec393451","removed-remote-peer-urls":["https://192.169.0.8:2380"]}
	{"level":"info","ts":"2024-12-13T19:37:49.631741Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.632018Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.632125Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.631909Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"e397b3b47bd62ab9","removed-member-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.632947Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-12-13T19:37:49.633407Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.633562Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.633738Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.633916Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451","error":"context canceled"}
	{"level":"warn","ts":"2024-12-13T19:37:49.634028Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"afd89b9ec393451","error":"failed to read afd89b9ec393451 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-12-13T19:37:49.634104Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.634388Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451","error":"context canceled"}
	{"level":"info","ts":"2024-12-13T19:37:49.634446Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.634469Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:37:49.634519Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"e397b3b47bd62ab9","removed-remote-peer-id":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.640548Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"e397b3b47bd62ab9","remote-peer-id-stream-handler":"e397b3b47bd62ab9","remote-peer-id-from":"afd89b9ec393451"}
	{"level":"warn","ts":"2024-12-13T19:37:49.644460Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"e397b3b47bd62ab9","remote-peer-id-stream-handler":"e397b3b47bd62ab9","remote-peer-id-from":"afd89b9ec393451"}
	
	
	==> etcd [ad0dc00c3676] <==
	2024/12/13 19:33:15 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-13T19:33:15.919286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"911.52519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-12-13T19:33:15.919296Z","caller":"traceutil/trace.go:171","msg":"trace[646065576] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"911.536819ms","start":"2024-12-13T19:33:15.007757Z","end":"2024-12-13T19:33:15.919293Z","steps":["trace[646065576] 'agreement among raft nodes before linearized reading'  (duration: 911.525741ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:33:15.919307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:33:15.007742Z","time spent":"911.561075ms","remote":"127.0.0.1:57240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2024/12/13 19:33:15 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-13T19:33:15.953693Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.6:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-13T19:33:15.953754Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.6:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-13T19:33:15.953797Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"e397b3b47bd62ab9","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-12-13T19:33:15.956144Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956196Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956235Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956328Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956354Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956412Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956443Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"698b940776f4565c"}
	{"level":"info","ts":"2024-12-13T19:33:15.956450Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956457Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956468Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956907Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.956957Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.957005Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e397b3b47bd62ab9","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.957016Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"afd89b9ec393451"}
	{"level":"info","ts":"2024-12-13T19:33:15.960175Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.6:2380"}
	{"level":"info","ts":"2024-12-13T19:33:15.960341Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.6:2380"}
	{"level":"info","ts":"2024-12-13T19:33:15.960352Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-224000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.6:2380"],"advertise-client-urls":["https://192.169.0.6:2379"]}
	
	
	==> kernel <==
	 19:37:59 up 4 min,  0 users,  load average: 0.54, 0.41, 0.20
	Linux ha-224000 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [532326a9b719] <==
	I1213 19:32:38.955729       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:32:48.951745       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:32:48.951937       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:32:48.952237       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:32:48.952297       1 main.go:301] handling current node
	I1213 19:32:48.952312       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:32:48.952320       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:32:48.952519       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:32:48.952573       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:32:58.952815       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:32:58.952836       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:32:58.953197       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:32:58.953257       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:32:58.953413       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:32:58.953484       1 main.go:301] handling current node
	I1213 19:32:58.953506       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:32:58.953519       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:33:08.953874       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:33:08.953928       1 main.go:301] handling current node
	I1213 19:33:08.954191       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:33:08.954234       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:33:08.955460       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:33:08.955468       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:33:08.955667       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:33:08.955695       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f1b285481948] <==
	I1213 19:37:21.245123       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:37:21.245378       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:37:21.245522       1 main.go:301] handling current node
	I1213 19:37:31.243688       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:37:31.243758       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:37:31.243918       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:37:31.244043       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:37:31.244392       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:37:31.244432       1 main.go:301] handling current node
	I1213 19:37:31.244443       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:37:31.244449       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:37:41.249106       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:37:41.249448       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
	I1213 19:37:41.249978       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:37:41.250111       1 main.go:301] handling current node
	I1213 19:37:41.250163       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:37:41.250282       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:37:41.250439       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1213 19:37:41.250519       1 main.go:324] Node ha-224000-m03 has CIDR [10.244.2.0/24] 
	I1213 19:37:51.243452       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1213 19:37:51.243568       1 main.go:301] handling current node
	I1213 19:37:51.243598       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1213 19:37:51.243617       1 main.go:324] Node ha-224000-m02 has CIDR [10.244.1.0/24] 
	I1213 19:37:51.243864       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1213 19:37:51.243930       1 main.go:324] Node ha-224000-m04 has CIDR [10.244.3.0/24] 
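
	Both kindnet snapshots walk the node list every ~10s and log each node's PodCIDR. In the final pass at 19:37:51 the 192.169.0.8 entry (ha-224000-m03) is gone, consistent with its removal from etcd at 19:37:49 above. The same name-to-CIDR mapping can be read straight from the API; a sketch:
	
	  $ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'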
	
	
	==> kube-apiserver [0457a6eb9fce] <==
	I1213 19:33:49.820720       1 options.go:228] external host was not specified, using 192.169.0.6
	I1213 19:33:49.826974       1 server.go:142] Version: v1.31.2
	I1213 19:33:49.828876       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:33:50.369348       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1213 19:33:50.373560       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1213 19:33:50.376229       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1213 19:33:50.376292       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1213 19:33:50.376453       1 instance.go:232] Using reconciler: lease
	W1213 19:34:10.367496       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1213 19:34:10.367678       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1213 19:34:10.377527       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
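
	This apiserver instance died during startup: etcd on 127.0.0.1:2379 was unreachable for the whole 20s window (19:33:50 to 19:34:10), the storage factory hit its context deadline, and the process exited fatally (the F-prefixed line), which is why a second container, 2c7e84811a05, appears below. Logs of the dead attempt remain readable on the node; a sketch, assuming the docker runtime reported in the node info above:
	
	  $ minikube ssh -p ha-224000 "docker logs --tail 20 0457a6eb9fce"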
	
	
	==> kube-apiserver [2c7e84811a05] <==
	I1213 19:34:33.858755       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1213 19:34:33.858846       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1213 19:34:33.932383       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1213 19:34:33.934311       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 19:34:33.944721       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 19:34:33.944939       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 19:34:33.945156       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1213 19:34:33.945214       1 policy_source.go:224] refreshing policies
	I1213 19:34:33.946446       1 shared_informer.go:320] Caches are synced for configmaps
	I1213 19:34:33.950262       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 19:34:33.950654       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 19:34:33.952135       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1213 19:34:33.958706       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1213 19:34:33.958952       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1213 19:34:33.959051       1 aggregator.go:171] initial CRD sync complete...
	I1213 19:34:33.959071       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 19:34:33.959175       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 19:34:33.959196       1 cache.go:39] Caches are synced for autoregister controller
	W1213 19:34:33.972653       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.7]
	I1213 19:34:33.974278       1 controller.go:615] quota admission added evaluator for: endpoints
	I1213 19:34:33.985761       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1213 19:34:33.990131       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1213 19:34:34.005835       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 19:34:34.842581       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1213 19:34:35.103753       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	
	
	==> kube-controller-manager [dba699a29858] <==
	I1213 19:35:55.552893       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9khgk\": the object has been modified; please apply your changes to the latest version and try again"
	I1213 19:35:55.553121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="14.725541ms"
	I1213 19:35:55.553280       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.548µs"
	I1213 19:35:55.553635       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"62fdbc68-3cb2-4c62-84a6-34ec3a6b8454", APIVersion:"v1", ResourceVersion:"255", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9khgk": the object has been modified; please apply your changes to the latest version and try again
	I1213 19:35:55.571600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.492248ms"
	I1213 19:35:55.576690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="52.23µs"
	I1213 19:35:55.577745       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9khgk\": the object has been modified; please apply your changes to the latest version and try again"
	I1213 19:35:55.578045       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"62fdbc68-3cb2-4c62-84a6-34ec3a6b8454", APIVersion:"v1", ResourceVersion:"255", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9khgk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9khgk": the object has been modified; please apply your changes to the latest version and try again
	I1213 19:35:55.625981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="11.797733ms"
	I1213 19:35:55.626922       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.294µs"
	I1213 19:37:46.369030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m03"
	I1213 19:37:46.381408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m03"
	I1213 19:37:46.541953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="142.510127ms"
	I1213 19:37:46.542674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.239µs"
	I1213 19:37:48.552936       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.345µs"
	I1213 19:37:49.216749       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.583µs"
	I1213 19:37:49.219502       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.65µs"
	I1213 19:37:50.388977       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-224000-m03"
	E1213 19:37:50.419561       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-224000-m03\", UID:\"dbfd547b-46b2-4d01-b5ad-c13202bbbb2d\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-224000-m03\", UID:\"5f2128c5-ecb0-4494-b745-3548943f47df\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-224000-m03\" not found" logger="UnhandledError"
	E1213 19:37:50.420034       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-224000-m03\", UID:\"e099dcf0-e130-4edd-882b-188b4e85113b\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-224000-m03\", UID:\"5f2128c5-ecb0-4494-b745-3548943f47df\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-224000-m03\" not found" logger="UnhandledError"
	E1213 19:37:57.423965       1 gc_controller.go:151] "Failed to get node" err="node \"ha-224000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-224000-m03"
	E1213 19:37:57.424009       1 gc_controller.go:151] "Failed to get node" err="node \"ha-224000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-224000-m03"
	E1213 19:37:57.424017       1 gc_controller.go:151] "Failed to get node" err="node \"ha-224000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-224000-m03"
	E1213 19:37:57.424021       1 gc_controller.go:151] "Failed to get node" err="node \"ha-224000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-224000-m03"
	E1213 19:37:57.424025       1 gc_controller.go:151] "Failed to get node" err="node \"ha-224000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-224000-m03"
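
	The garbage-collector and pod-GC errors here are the expected tail of deleting node ha-224000-m03: its Lease and CSINode objects are owner-referenced to the Node and disappear with it, so the collector's already-queued items resolve to "not found". The per-node heartbeat leases live in their own namespace and are easy to enumerate; a sketch:
	
	  $ kubectl -n kube-node-lease get leases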
	
	
	==> kube-controller-manager [dda62d21c5c2] <==
	I1213 19:33:49.641671       1 serving.go:386] Generated self-signed cert in-memory
	I1213 19:33:50.338076       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1213 19:33:50.338108       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:33:50.340327       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 19:33:50.340428       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 19:33:50.340697       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1213 19:33:50.340882       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 19:34:11.384884       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.6:8443/healthz\": dial tcp 192.169.0.6:8443: connect: connection refused"
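
	This earlier controller-manager instance gave up because the apiserver on 192.169.0.6:8443 refused connections for the entire health-check window, which lines up with the apiserver startup failure at 19:34:10 above. The same probe can be run by hand; a sketch (-k skips certificate verification):
	
	  $ curl -k https://192.169.0.6:8443/healthz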
	
	
	==> kube-proxy [5f565c400b73] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1213 19:35:27.545116       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1213 19:35:27.561280       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.6"]
	E1213 19:35:27.561547       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:35:27.593343       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1213 19:35:27.593524       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 19:35:27.593695       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:35:27.599613       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:35:27.600762       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:35:27.600792       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:35:27.603008       1 config.go:199] "Starting service config controller"
	I1213 19:35:27.603210       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:35:27.603407       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:35:27.603433       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:35:27.604612       1 config.go:328] "Starting node config controller"
	I1213 19:35:27.604643       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:35:27.704590       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1213 19:35:27.704694       1 shared_informer.go:320] Caches are synced for node config
	I1213 19:35:27.704710       1 shared_informer.go:320] Caches are synced for service config
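
	The nftables errors at the head of both kube-proxy blocks are benign on this guest: the Buildroot kernel apparently lacks nftables support ("Operation not supported"), so the startup cleanup fails and kube-proxy proceeds with the iptables proxier, as logged. That the iptables rules were actually programmed can be confirmed from inside the VM; a sketch:
	
	  $ minikube ssh -p ha-224000 "sudo iptables-save | grep -c KUBE-"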
	
	
	==> kube-proxy [94480a2dd9b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1213 19:26:14.203354       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1213 19:26:14.213097       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.6"]
	E1213 19:26:14.213174       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:26:14.241202       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1213 19:26:14.241246       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 19:26:14.241263       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:26:14.244275       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:26:14.244855       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:26:14.244882       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:26:14.246052       1 config.go:199] "Starting service config controller"
	I1213 19:26:14.246200       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:26:14.246348       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:26:14.246374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:26:14.246424       1 config.go:328] "Starting node config controller"
	I1213 19:26:14.246441       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:26:14.347309       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1213 19:26:14.347360       1 shared_informer.go:320] Caches are synced for service config
	I1213 19:26:14.347669       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [63c39e011231] <==
	E1213 19:28:30.473242       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jxwhq\": pod kube-proxy-jxwhq is already assigned to node \"ha-224000-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jxwhq" node="ha-224000-m03"
	E1213 19:28:30.474646       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d5770b31-991f-43c2-82a4-f0051e25f645(kube-system/kindnet-kpjh5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kpjh5"
	E1213 19:28:30.474870       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4b9ed970-5ad3-4b15-a714-24f0f06632c8(kube-system/kube-proxy-gmw9z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-gmw9z"
	E1213 19:28:30.475888       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kpjh5\": pod kindnet-kpjh5 is already assigned to node \"ha-224000-m03\"" pod="kube-system/kindnet-kpjh5"
	E1213 19:28:30.476671       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jxwhq\": pod kube-proxy-jxwhq is already assigned to node \"ha-224000-m03\"" pod="kube-system/kube-proxy-jxwhq"
	I1213 19:28:30.476729       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jxwhq" node="ha-224000-m03"
	I1213 19:28:30.475988       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kpjh5" node="ha-224000-m03"
	E1213 19:28:30.475897       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-gmw9z\": pod kube-proxy-gmw9z is already assigned to node \"ha-224000-m03\"" pod="kube-system/kube-proxy-gmw9z"
	I1213 19:28:30.478106       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-gmw9z" node="ha-224000-m03"
	E1213 19:28:59.957880       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod eaf3a368-16e9-43ba-ae1f-1ddc94ef233e(default/busybox-7dff88458-l97s5) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-l97s5"
	E1213 19:28:59.957902       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod eaf3a368-16e9-43ba-ae1f-1ddc94ef233e(default/busybox-7dff88458-l97s5) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-l97s5"
	I1213 19:28:59.957915       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-l97s5" node="ha-224000-m02"
	E1213 19:29:00.063963       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-zs25q is already present in the active queue" pod="default/busybox-7dff88458-zs25q"
	E1213 19:29:00.081842       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-zs25q\" not found" pod="default/busybox-7dff88458-zs25q"
	E1213 19:31:24.582665       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7b8ch\": pod kube-proxy-7b8ch is already assigned to node \"ha-224000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7b8ch" node="ha-224000-m04"
	E1213 19:31:24.582727       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7b8ch\": pod kube-proxy-7b8ch is already assigned to node \"ha-224000-m04\"" pod="kube-system/kube-proxy-7b8ch"
	E1213 19:31:24.582830       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8ccp4\": pod kube-proxy-8ccp4 is already assigned to node \"ha-224000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8ccp4" node="ha-224000-m04"
	E1213 19:31:24.582939       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8ccp4\": pod kube-proxy-8ccp4 is already assigned to node \"ha-224000-m04\"" pod="kube-system/kube-proxy-8ccp4"
	E1213 19:31:24.583359       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qqm9r\": pod kindnet-qqm9r is already assigned to node \"ha-224000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qqm9r" node="ha-224000-m04"
	E1213 19:31:24.583404       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qqm9r\": pod kindnet-qqm9r is already assigned to node \"ha-224000-m04\"" pod="kube-system/kindnet-qqm9r"
	I1213 19:31:24.586044       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7b8ch" node="ha-224000-m04"
	I1213 19:33:15.853518       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 19:33:15.859188       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 19:33:15.859357       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1213 19:33:15.864811       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8cce3a8cb126] <==
	E1213 19:34:33.927009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.927118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 19:34:33.927159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.927343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 19:34:33.927384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.927452       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 19:34:33.927490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.929589       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 19:34:33.929630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.929845       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 19:34:33.929886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.929952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 19:34:33.930027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 19:34:33.930195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 19:34:33.930473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 19:34:33.930610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 19:34:33.930722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:34:33.930989       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 19:34:33.931026       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1213 19:34:55.098739       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1213 19:37:46.507664       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-n5j7r\" not found" pod="default/busybox-7dff88458-n5j7r"
	
	
	==> kubelet <==
	Dec 13 19:35:42 ha-224000 kubelet[1594]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 19:35:42 ha-224000 kubelet[1594]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 19:35:42 ha-224000 kubelet[1594]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 19:35:42 ha-224000 kubelet[1594]: I1213 19:35:42.186925    1594 scope.go:117] "RemoveContainer" containerID="901560cab05afd01ac1f97679993cf515730a563066592c72d364d4f023faa11"
	Dec 13 19:35:57 ha-224000 kubelet[1594]: I1213 19:35:57.639988    1594 scope.go:117] "RemoveContainer" containerID="6e865c58301353a95a17f9b7cc0efd9f449785d4fa6d23de4eae2d1f5ef7aa69"
	Dec 13 19:35:57 ha-224000 kubelet[1594]: I1213 19:35:57.640662    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:35:57 ha-224000 kubelet[1594]: E1213 19:35:57.640842    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:09 ha-224000 kubelet[1594]: I1213 19:36:09.158547    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:09 ha-224000 kubelet[1594]: E1213 19:36:09.158675    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:20 ha-224000 kubelet[1594]: I1213 19:36:20.159152    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:20 ha-224000 kubelet[1594]: E1213 19:36:20.159302    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:31 ha-224000 kubelet[1594]: I1213 19:36:31.158111    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:31 ha-224000 kubelet[1594]: E1213 19:36:31.158349    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b3bd2963-cd6d-462d-9162-3ac606e91850)\"" pod="kube-system/storage-provisioner" podUID="b3bd2963-cd6d-462d-9162-3ac606e91850"
	Dec 13 19:36:42 ha-224000 kubelet[1594]: I1213 19:36:42.158392    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:36:42 ha-224000 kubelet[1594]: E1213 19:36:42.198509    1594 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 13 19:36:42 ha-224000 kubelet[1594]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 13 19:36:42 ha-224000 kubelet[1594]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 19:36:42 ha-224000 kubelet[1594]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 19:36:42 ha-224000 kubelet[1594]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 19:36:42 ha-224000 kubelet[1594]: I1213 19:36:42.216134    1594 scope.go:117] "RemoveContainer" containerID="813406d565c19a4dfed3526b6d47048c46e127b395f4d271632a73ad683f44a3"
	Dec 13 19:37:42 ha-224000 kubelet[1594]: E1213 19:37:42.172559    1594 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 13 19:37:42 ha-224000 kubelet[1594]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 13 19:37:42 ha-224000 kubelet[1594]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 19:37:42 ha-224000 kubelet[1594]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 19:37:42 ha-224000 kubelet[1594]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
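Triage note on the log block above: the recurring kubelet canary failures ("can't initialize ip6tables table `nat'") indicate the guest kernel has no ip6table_nat support, which is also why kube-proxy falls back to single-stack IPv4 ("No iptables support for family" ipFamily="IPv6"). A minimal diagnostic sketch, assuming the ha-224000 profile is still running (commands illustrative, run from the host):

	# Check whether the ip6tables nat table is usable inside the guest;
	# the canary error above means the ip6table_nat module is absent.
	minikube ssh -p ha-224000 -- sudo ip6tables -t nat -L
	minikube ssh -p ha-224000 -- lsmod | grep -i ip6table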
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-224000 -n ha-224000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-224000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-9j5jp
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-224000 describe pod busybox-7dff88458-9j5jp
helpers_test.go:282: (dbg) kubectl --context ha-224000 describe pod busybox-7dff88458-9j5jp:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-9j5jp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-55x6l (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-55x6l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  15s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  15s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  13s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  13s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  13s (x2 over 15s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (4.49s)
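The FailedScheduling events in the describe output above are internally consistent: after the secondary control-plane node is deleted, 0/4 nodes can host the pending busybox replica because one node is unreachable, one is unschedulable, and the two survivors already run anti-affine replicas. A hedged sketch for reproducing that view by hand (context name taken from the log, flags illustrative):

	# Show where the busybox replicas landed, then the remaining nodes'
	# schedulability and taints, to confirm the anti-affinity dead end.
	kubectl --context ha-224000 get pods -l app=busybox -o wide
	kubectl --context ha-224000 get nodes -o wide
	kubectl --context ha-224000 describe nodes | grep -A2 -i taint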

                                                
                                    
TestMountStart/serial/StartWithMountFirst (137.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-410000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E1213 11:46:45.198080    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:48:42.128203    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-410000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.989113823s)

                                                
                                                
-- stdout --
	* [mount-start-1-410000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-410000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-410000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:0d:50:f5:23:11
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-410000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:0c:20:3e:49:5c
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:0c:20:3e:49:5c
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-410000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-410000 -n mount-start-1-410000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-410000 -n mount-start-1-410000: exit status 7 (101.929435ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 11:48:50.538161    6065 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1213 11:48:50.538184    6065 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-410000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (137.09s)
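This failure happens during VM provisioning, before any mount or Kubernetes logic runs: the hyperkit driver polls the macOS DHCP leases file for the guest's MAC address and never finds it. A hedged diagnostic sketch (path is the macOS default lease file; MAC taken from the error above):

	# If the MAC from the GUEST_PROVISION error never appears here,
	# the host-side bootpd/vmnet DHCP service is the place to look.
	cat /var/db/dhcpd_leases
	grep -i '5e:0c:20:3e:49:5c' /var/db/dhcpd_leases || echo 'no lease recorded'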

                                                
                                    
TestScheduledStopUnix (142.12s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-605000 --memory=2048 --driver=hyperkit 
E1213 12:02:22.576110    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-605000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.742711804s)

                                                
                                                
-- stdout --
	* [scheduled-stop-605000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-605000" primary control-plane node in "scheduled-stop-605000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-605000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e6:46:00:75:24:2a
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-605000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:a2:e1:0a:8b:be
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:a2:e1:0a:8b:be
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-605000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-605000" primary control-plane node in "scheduled-stop-605000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-605000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e6:46:00:75:24:2a
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-605000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:a2:e1:0a:8b:be
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:a2:e1:0a:8b:be
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-12-13 12:03:22.06088 -0800 PST m=+3650.060822897
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-605000 -n scheduled-stop-605000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-605000 -n scheduled-stop-605000: exit status 7 (102.485796ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:03:22.161454    7276 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1213 12:03:22.161479    7276 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-605000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-605000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-605000
E1213 12:03:25.213979    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-605000: (5.270692034s)
--- FAIL: TestScheduledStopUnix (142.12s)
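Same GUEST_PROVISION signature as TestMountStart above: the DHCP lease for the generated MAC never materializes, on both the first and the retried VM. When this recurs on an agent, one commonly suggested manual recovery is to clear stale lease state and restart the host DHCP responder; a hedged sketch (service label per macOS convention, verify on the agent before use):

	# Clear stale vmnet leases and restart macOS's DHCP responder (bootpd).
	sudo rm /var/db/dhcpd_leases
	sudo /bin/launchctl kickstart -k system/com.apple.bootpd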

                                                
                                    
TestPause/serial/Start (141.21s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-607000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-607000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m21.097561784s)

                                                
                                                
-- stdout --
	* [pause-607000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-607000" primary control-plane node in "pause-607000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-607000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 96:c2:ba:21:7b:02
	* Failed to start hyperkit VM. Running "minikube delete -p pause-607000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for be:c4:d8:8d:8c:56
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for be:c4:d8:8d:8c:56
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-607000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-607000 -n pause-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-607000 -n pause-607000: exit status 7 (109.725168ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:44:26.412248    9728 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1213 12:44:26.412273    9728 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-607000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (141.21s)
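TestPause/serial/Start fails with the identical GUEST_PROVISION / DHCP-lease error seen in TestMountStart and TestScheduledStopUnix above, which points at a host-side vmnet/bootpd problem on this agent rather than anything pause-specific; the recovery sketch under TestScheduledStopUnix applies here as well.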

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (7201.763s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-411000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.2
E1213 13:01:32.360743    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/false-006000/client.crt: no such file or directory" logger="UnhandledError"
E1213 13:01:36.580176    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/kubenet-006000/client.crt: no such file or directory" logger="UnhandledError"
E1213 13:01:57.096400    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/custom-flannel-006000/client.crt: no such file or directory" logger="UnhandledError"
E1213 13:01:58.257327    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/flannel-006000/client.crt: no such file or directory" logger="UnhandledError"
E1213 13:02:24.817372    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/custom-flannel-006000/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (57m9s)
		TestNetworkPlugins/group (6m52s)
		TestStartStop (18m39s)
		TestStartStop/group/embed-certs (1m19s)
		TestStartStop/group/embed-certs/serial (1m19s)
		TestStartStop/group/embed-certs/serial/FirstStart (1m19s)
		TestStartStop/group/old-k8s-version (9m0s)
		TestStartStop/group/old-k8s-version/serial (9m0s)
		TestStartStop/group/old-k8s-version/serial/SecondStart (6m20s)
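The 2h0m0s deadline here is go test's own -timeout alarm (see testing.(*M).startAlarm in the first goroutine below); everything after this point is the standard full goroutine dump that go test emits when the deadline fires, so the 7201s duration reported for FirstStart reflects the suite-wide timeout, not this subtest alone. A hedged sketch of how such a deadline is set when invoking the suite (package path and flags illustrative):

	# go test panics with "test timed out after <d>" and dumps all
	# goroutines once the -timeout duration elapses.
	go test ./test/integration -run 'TestStartStop/group/embed-certs' -timeout 2h -v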

                                                
                                                
goroutine 3951 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 22 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0009cc340, 0xc000091bc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc000798090, {0x115d9400, 0x2a, 0x2a}, {0xc7934d6?, 0xffffffffffffffff?, 0x11600320?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000a7a0a0)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000a7a0a0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:129 +0xa8

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000800d00)
	/home/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/home/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 154 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0007d8a50, 0x2c)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000984d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007d8ac0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a82010, {0xfc9b640, 0xc000cf40f0}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a82010, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 144
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2741 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2736
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2703 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0009c5400)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001470ea0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001470ea0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001470ea0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc001470ea0, 0xc000a2e300)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2700
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3018 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc0007def50, 0xc0007def98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0xb0?, 0xc0007def50, 0xc0007def98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0x61f798af13d66a74?, 0xf6189ccd113013f2?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014b77d0?, 0xc910344?, 0xc00008b3b0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3025
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 143 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 142
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3116 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc0014b6f50, 0xc0014b6f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0x10?, 0xc0014b6f50, 0xc0014b6f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0xc0009cc4e0?, 0xc8d1420?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014b6fd0?, 0xc910344?, 0xc0022a66c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3122
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2701 [chan receive, 9 minutes]:
testing.(*T).Run(0xc001470680, {0xe76ef84?, 0x0?}, 0xc000800080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001470680)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0xad9
testing.tRunner(0xc001470680, 0xc000a2e280)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2700
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 156 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 155
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 155 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc000987f50, 0xc000987f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0xd0?, 0xc000987f50, 0xc000987f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0xc0009cc820?, 0xc8d1420?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc9102e5?, 0xc00003a480?, 0xc00008b2d0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 144
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 144 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007d8ac0, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 142
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

                                                
                                                
goroutine 738 [IO wait, 109 minutes]:
internal/poll.runtime_pollWait(0x59343e68, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00011fd00?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00011fd00)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc00011fd00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc001f1a4c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc001f1a4c0)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc000255e00, {0xfcc7140, 0xc001f1a4c0})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc000255e00)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc001860820?, 0xc001860820)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 735
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 3115 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001933710, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0007e0d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001933740)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021f2880, {0xfc9b640, 0xc0015a6ae0}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021f2880, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3122
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3121 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3111
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3355 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3351
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3258 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc001b48750, 0xc001b48798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0xb0?, 0xc001b48750, 0xc001b48798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0x30333a2273646e6f?, 0x79656b227b2c7d30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc9102e5?, 0xc0017f8600?, 0xc0018585b0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3243
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 3479 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001932410, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00146fd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001932440)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007c1270, {0xfc9b640, 0xc001ec26c0}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007c1270, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3453
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3966 [IO wait]:
internal/poll.runtime_pollWait(0x593442c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001d2f6e0?, 0xc001a11b2c?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d2f6e0, {0xc001a11b2c, 0x1a4d4, 0x1a4d4})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0022c2430, {0xc001a11b2c?, 0xc000509d38?, 0x1fe33?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001cf26c0, {0xfc99a88, 0xc001c9ca40})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xfc99c20, 0xc001cf26c0}, {0xfc99a88, 0xc001c9ca40}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc7934d6?, {0xfc99c20, 0xc001cf26c0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000509ea8?, {0xfc99c20?, 0xc001cf26c0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0xfc99c20, 0xc001cf26c0}, {0xfc99b80, 0xc0022c2430}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0009cc680?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3964
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 3723 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3719
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3452 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3475
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3242 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3238
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3372 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3371
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3370 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0009d09d0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00146cd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009d0a00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006e0e70, {0xfc9b640, 0xc000852930}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006e0e70, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3356
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3711 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc0014b6750, 0xc0014b6798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0x80?, 0xc0014b6750, 0xc0014b6798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0xcd69ed6?, 0xc00003ad80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014b67d0?, 0xc910344?, 0xc001e77880?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3724
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 3879 [IO wait]:
internal/poll.runtime_pollWait(0x59343b20, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00082b0e0?, 0xc0016604da?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00082b0e0, {0xc0016604da, 0x11b26, 0x11b26})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0022c2880, {0xc0016604da?, 0xc001b44d50?, 0x1fe19?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001518db0, {0xfc99a88, 0xc001c9ce58})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xfc99c20, 0xc001518db0}, {0xfc99a88, 0xc001c9ce58}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001b44e78?, {0xfc99c20, 0xc001518db0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001b44f38?, {0xfc99c20?, 0xc001518db0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0xfc99c20, 0xc001518db0}, {0xfc99b80, 0xc0022c2880}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc002103420?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3877
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 3117 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3116
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2166 [chan receive, 58 minutes]:
testing.(*T).Run(0xc0000231e0, {0xe76dbaa?, 0x328f2f34137?}, 0xc0015214b8)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0000231e0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc0000231e0, 0xfc8c858)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3878 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x59344098, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00082b020?, 0xc0018554d5?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00082b020, {0xc0018554d5, 0x32b, 0x32b})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0022c2868, {0xc0018554d5?, 0x593d3898?, 0x263?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001518d80, {0xfc99a88, 0xc001c9ce50})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xfc99c20, 0xc001518d80}, {0xfc99a88, 0xc001c9ce50}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x114b5d60?, {0xfc99c20, 0xc001518d80})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xf?, {0xfc99c20?, 0xc001518d80?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0xfc99c20, 0xc001518d80}, {0xfc99b80, 0xc0022c2868}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc002024200?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3877
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 3592 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3591
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3580 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000abe900, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3578
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 2742 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a2e940, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2736
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3965 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x59343d50, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001d2f620?, 0xc000aed3a7?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d2f620, {0xc000aed3a7, 0x459, 0x459})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0022c2418, {0xc000aed3a7?, 0xc00155fd48?, 0x230?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001cf2690, {0xfc99a88, 0xc001c9ca38})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xfc99c20, 0xc001cf2690}, {0xfc99a88, 0xc001c9ca38}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x1000000114b5d60?, {0xfc99c20, 0xc001cf2690})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xf?, {0xfc99c20?, 0xc001cf2690?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0xfc99c20, 0xc001cf2690}, {0xfc99b80, 0xc0022c2418}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc002024a80?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3964
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 1398 [select, 97 minutes]:
net/http.(*persistConn).writeLoop(0xc0006f8240)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 1414
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

goroutine 2758 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2757
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 1362 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc000234a80, 0xc0021f85b0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 850
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3122 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001933740, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3111
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 2700 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0014701a0, 0xfc8ca18)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2252
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3967 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019c2c00, 0xc002102f50)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3964
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3724 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0020dee00, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3719
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 2221 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2220
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3453 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001932440, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3475
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 1257 [chan send, 98 minutes]:
os/exec.(*Cmd).watchCtx(0xc001f65800, 0xc002102fc0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1256
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3480 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc0007f1750, 0xc0007f1798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0xa0?, 0xc0007f1750, 0xc0007f1798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0xc0009cda00?, 0xc8d1420?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0007f17d0?, 0xc910344?, 0xc001b77680?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3453
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 1310 [chan send, 98 minutes]:
os/exec.(*Cmd).watchCtx(0xc00216fc80, 0xc0021f8e00)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1309
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3371 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc0000b9750, 0xc0000b9798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0xf0?, 0xc0000b9750, 0xc0000b9798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0xcd69ed6?, 0xc00182cf00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000b97d0?, 0xc910344?, 0xc0018583f0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3356
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 970 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 863
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 971 [chan receive, 98 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006cd400, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 863
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3696 [chan receive, 7 minutes]:
testing.(*T).Run(0xc0009cc1a0, {0xe779ede?, 0xc00023bdc0?}, 0xc002024200)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0009cc1a0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x2af
testing.tRunner(0xc0009cc1a0, 0xc000800080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2701
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2219 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0006cc410, 0x1e)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000986d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006cc480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00075ce50, {0xfc9b640, 0xc0014d42d0}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00075ce50, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2239
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3880 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019c2a80, 0xc0021035e0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3877
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 2842 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0019322d0, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014c7d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001932300)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006e2030, {0xfc9b640, 0xc001500030}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006e2030, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2854
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3019 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3018
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 979 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 978
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 1111 [chan send, 98 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019c2300, 0xc001858f50)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1110
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 978 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc0000b8f50, 0xc001fecf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0x30?, 0xc0000b8f50, 0xc0000b8f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0xc001860820?, 0xc8d1420?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000b8fd0?, 0xc910344?, 0xc00092e930?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 971
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 1397 [select, 97 minutes]:
net/http.(*persistConn).readLoop(0xc0006f8240)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 1414
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 2239 [chan receive, 58 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006cc480, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2205
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 977 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0006cd3d0, 0x28)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001fe7d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006cd400)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007b2a30, {0xfc9b640, 0xc0015c40c0}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007b2a30, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 971
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3025 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006cc9c0, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2991
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 2757 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc0007f6f50, 0xc0007f6f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0xc0?, 0xc0007f6f50, 0xc0007f6f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0xc000023601?, 0xc00008a850?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0007f6fd0?, 0xc910344?, 0xc0021021c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2742
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 3710 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0020dedd0, 0xf)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0006d8d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0020dee00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021f3390, {0xfc9b640, 0xc002069530}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021f3390, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3724
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3259 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3258
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3243 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000abe800, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3238
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3481 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3480
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2220 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc0007f3f50, 0xc001fe8f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0x90?, 0xc0007f3f50, 0xc0007f3f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0xc001cfc340?, 0xc8d1420?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0007f3fd0?, 0xc910344?, 0xc00008ad90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2239
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 3257 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000abe7d0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014ccd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000abe800)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019b85f0, {0xfc9b640, 0xc001cf28a0}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019b85f0, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3243
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 2265 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0009c5400)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1666 +0x5e5
testing.tRunner(0xc000023d40, 0xc0015214b8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2166
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3356 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009d0a00, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3351
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 2238 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2205
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2252 [chan receive, 20 minutes]:
testing.(*T).Run(0xc001470000, {0xe76dbaa?, 0xc8d0b13?}, 0xfc8ca18)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001470000)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001470000, 0xfc8c8a0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2844 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2843
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2702 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0009c5400)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001470d00)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001470d00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001470d00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc001470d00, 0xc000a2e2c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2700
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2721 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0009c5400)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0014711e0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0014711e0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014711e0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc0014711e0, 0xc000a2e3c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2700
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3017 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0006cc990, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000989d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006cc9c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007621f0, {0xfc9b640, 0xc0015c4120}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007621f0, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3025
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 2756 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000a2e910, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0007dfd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a2e940)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00197c770, {0xfc9b640, 0xc0014b05d0}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00197c770, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2742
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3590 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000abe8d0, 0x10)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0006dfd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000abe900)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000763090, {0xfc9b640, 0xc0014b0a80}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000763090, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3580
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3812 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001932580, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3833
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 2722 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001471380, {0xe76ef84?, 0x0?}, 0xc002024a00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001471380)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0xad9
testing.tRunner(0xc001471380, 0xc000a2e440)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2700
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3579 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3578
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2992 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2991
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3591 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc0014b9750, 0xc0014b9798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0x0?, 0xc0014b9750, 0xc0014b9798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0xcd69ed6?, 0xc00191ed80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014b97d0?, 0xc910344?, 0xc0006cd440?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3580
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 2843 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc000983f50, 0xc000983f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0x90?, 0xc000983f50, 0xc000983f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001b477d0?, 0xc910344?, 0xc0019a8780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2854
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 3712 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3711
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3838 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc001560750, 0xc001560798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0x0?, 0xc001560750, 0xc001560798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3812
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 2853 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2852
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3811 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3833
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3866 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b42940, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3881
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 2854 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001932300, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2852
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

goroutine 3877 [syscall, 7 minutes]:
syscall.syscall6(0x59217648?, 0x90?, 0xc001feabf8?, 0x123465b8?, 0x90?, 0x100000c798fc5?, 0x19?)
	/usr/local/go/src/runtime/sys_darwin.go:60 +0x78
syscall.wait4(0xc001feabb8?, 0xc794ac5?, 0x90?, 0xfc018e0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xc000440460?, 0xc001feabec, 0xc0022d6a98?, 0xc0021f37a0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).pidWait(0xc00223fdc0)
	/usr/local/go/src/os/exec_unix.go:70 +0x86
os.(*Process).wait(0xc7df419?)
	/usr/local/go/src/os/exec_unix.go:30 +0x1b
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0019c2a80)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0019c2a80)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0009ccd00, 0xc0019c2a80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0xfcd3ee8, 0xc0004e3570}, 0xc0009ccd00, {0xc001fc62a0, 0x16}, {0x2b85fba801b45758?, 0xc001b45760?}, {0xc8d0b13?, 0xc8312af?}, {0xc001e73680, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:254 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0009ccd00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x66
testing.tRunner(0xc0009ccd00, 0xc002024200)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3696
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3865 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xfcca480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3881
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3963 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0009cc9c0, {0xe778052?, 0xc0020c8e00?}, 0xc002024a80)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0009cc9c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x2af
testing.tRunner(0xc0009cc9c0, 0xc002024a00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2722
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3837 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001932550, 0x1)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000cfdd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001932580)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000848b60, {0xfc9b640, 0xc00098eb10}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000848b60, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3812
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3839 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3838
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3885 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc001b42910, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000506580?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xfcef160)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b42940)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019b8890, {0xfc9b640, 0xc001518960}, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019b8890, 0x3b9aca00, 0x0, 0x1, 0xc00008a850)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3866
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

goroutine 3886 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xfcd41f0, 0xc00008a850}, 0xc001b4a750, 0xc001b4a798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xfcd41f0, 0xc00008a850}, 0x0?, 0xc001b4a750, 0xc001b4a798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xfcd41f0?, 0xc00008a850?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xcdf5fe5?, 0xc001bf6840?, 0xfcca480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3866
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

goroutine 3887 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3886
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3964 [syscall, 2 minutes]:
syscall.syscall6(0x59049128?, 0x90?, 0xc0007e1c28?, 0x123465b8?, 0x90?, 0x100000c798fc5?, 0x19?)
	/usr/local/go/src/runtime/sys_darwin.go:60 +0x78
syscall.wait4(0xc0007e1be8?, 0xc794ac5?, 0x90?, 0xfc018e0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xc0000357a0?, 0xc0007e1c1c, 0xc0014af2f0?, 0xc000a80130?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).pidWait(0xc00223f180)
	/usr/local/go/src/os/exec_unix.go:70 +0x86
os.(*Process).wait(0xc7df419?)
	/usr/local/go/src/os/exec_unix.go:30 +0x1b
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0019c2c00)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0019c2c00)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0009ccb60, 0xc0019c2c00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0xfcd3ee8?, 0xc0000356c0?}, 0xc0009ccb60, {0xc001fc69c0?, 0x33a5dce0?}, {0x33a5dce00155ff58?, 0xc00155ff60?}, {0xc8d0b13?, 0xc8312af?}, {0xc001934900, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:184 +0xc5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0009ccb60)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x66
testing.tRunner(0xc0009ccb60, 0xc002024a80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3963
	/usr/local/go/src/testing/testing.go:1743 +0x390
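
The two [syscall] goroutines above (3877 and 3964) are parked in syscall.wait4 beneath os/exec.(*Cmd).Run: the integration helper at helpers_test.go:103 is still waiting on a long-running out/minikube-darwin-amd64 child when the dump is taken. Below is a minimal sketch of that pattern with a context deadline added, so a hung child surfaces as a bounded timeout rather than an indefinitely blocked goroutine. The helper name, timeout, and invocation are illustrative assumptions, not minikube's actual code.

```go
// Minimal sketch (assumed names, not minikube's real helper) of a test
// goroutine blocked in (*exec.Cmd).Run while a `minikube start` child runs.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func runWithDeadline(timeout time.Duration, name string, args ...string) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	// exec.CommandContext kills the child when ctx expires, so Wait returns
	// instead of sitting in syscall.wait4 forever.
	cmd := exec.CommandContext(ctx, name, args...)
	out, err := cmd.CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		return fmt.Errorf("%s timed out after %v", name, timeout)
	}
	if err != nil {
		return fmt.Errorf("%s: %w\n%s", name, err, out)
	}
	return nil
}

func main() {
	// Hypothetical invocation mirroring the blocked commands in the traces.
	if err := runWithDeadline(10*time.Minute, "out/minikube-darwin-amd64", "start", "-p", "demo"); err != nil {
		fmt.Println(err)
	}
}
```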


Test pass (188/221)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 19.58
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.31
9 TestDownloadOnly/v1.20.0/DeleteAll 0.29
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.26
12 TestDownloadOnly/v1.31.2/json-events 8.35
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.31
18 TestDownloadOnly/v1.31.2/DeleteAll 0.26
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.24
21 TestBinaryMirror 1.2
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
27 TestAddons/Setup 338.6
29 TestAddons/serial/Volcano 40.46
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.57
35 TestAddons/parallel/Registry 15.76
36 TestAddons/parallel/Ingress 21.37
37 TestAddons/parallel/InspektorGadget 11.53
38 TestAddons/parallel/MetricsServer 5.53
40 TestAddons/parallel/CSI 61.94
41 TestAddons/parallel/Headlamp 19.45
42 TestAddons/parallel/CloudSpanner 5.39
43 TestAddons/parallel/LocalPath 52.66
44 TestAddons/parallel/NvidiaDevicePlugin 5.36
45 TestAddons/parallel/Yakd 11.74
47 TestAddons/StoppedEnableDisable 6.02
55 TestHyperKitDriverInstallOrUpdate 9.01
58 TestErrorSpam/setup 40.3
59 TestErrorSpam/start 1.84
60 TestErrorSpam/status 0.61
61 TestErrorSpam/pause 1.48
62 TestErrorSpam/unpause 1.51
63 TestErrorSpam/stop 153.9
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 216.54
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 39.93
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
75 TestFunctional/serial/CacheCmd/cache/add_local 1.44
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
77 TestFunctional/serial/CacheCmd/cache/list 0.09
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.2
80 TestFunctional/serial/CacheCmd/cache/delete 0.19
81 TestFunctional/serial/MinikubeKubectlCmd 1.21
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.87
83 TestFunctional/serial/ExtraConfig 282.26
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 2.09
86 TestFunctional/serial/LogsFileCmd 2.22
87 TestFunctional/serial/InvalidService 4.03
89 TestFunctional/parallel/ConfigCmd 0.58
90 TestFunctional/parallel/DashboardCmd 12.36
91 TestFunctional/parallel/DryRun 1.08
92 TestFunctional/parallel/InternationalLanguage 0.52
93 TestFunctional/parallel/StatusCmd 0.58
97 TestFunctional/parallel/ServiceCmdConnect 7.61
98 TestFunctional/parallel/AddonsCmd 0.26
99 TestFunctional/parallel/PersistentVolumeClaim 28.4
101 TestFunctional/parallel/SSHCmd 0.34
102 TestFunctional/parallel/CpCmd 1.25
103 TestFunctional/parallel/MySQL 25
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.33
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.17
113 TestFunctional/parallel/License 0.71
114 TestFunctional/parallel/Version/short 0.13
115 TestFunctional/parallel/Version/components 0.48
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.18
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.25
121 TestFunctional/parallel/ImageCommands/Setup 1.83
122 TestFunctional/parallel/DockerEnv/bash 0.73
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.19
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.7
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.45
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
133 TestFunctional/parallel/ServiceCmd/DeployApp 23.14
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.03
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.15
139 TestFunctional/parallel/ServiceCmd/List 0.4
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.4
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
142 TestFunctional/parallel/ServiceCmd/Format 0.28
143 TestFunctional/parallel/ServiceCmd/URL 0.29
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.03
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.05
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.04
148 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.03
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.15
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
151 TestFunctional/parallel/ProfileCmd/profile_list 0.32
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
153 TestFunctional/parallel/MountCmd/any-port 6.07
154 TestFunctional/parallel/MountCmd/specific-port 1.67
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 204.97
163 TestMultiControlPlane/serial/DeployApp 5.43
164 TestMultiControlPlane/serial/PingHostFromPods 1.4
165 TestMultiControlPlane/serial/AddWorkerNode 167.58
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
168 TestMultiControlPlane/serial/CopyFile 10.44
169 TestMultiControlPlane/serial/StopSecondaryNode 8.75
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.44
171 TestMultiControlPlane/serial/RestartSecondaryNode 41.65
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.54
176 TestMultiControlPlane/serial/StopCluster 24.98
177 TestMultiControlPlane/serial/RestartCluster 163.22
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.42
179 TestMultiControlPlane/serial/AddSecondaryNode 75.45
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
183 TestImageBuild/serial/Setup 37.91
184 TestImageBuild/serial/NormalBuild 1.82
185 TestImageBuild/serial/BuildWithBuildArg 0.73
186 TestImageBuild/serial/BuildWithDockerIgnore 0.52
187 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.7
191 TestJSONOutput/start/Command 75.73
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.53
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.46
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 8.36
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.67
219 TestMainNoArgs 0.09
220 TestMinikubeProfile 90.51
226 TestMultiNode/serial/FreshStart2Nodes 112.55
227 TestMultiNode/serial/DeployApp2Nodes 4.79
228 TestMultiNode/serial/PingHostFrom2Pods 0.95
229 TestMultiNode/serial/AddNode 48.88
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.4
232 TestMultiNode/serial/CopyFile 6.1
233 TestMultiNode/serial/StopNode 2.94
234 TestMultiNode/serial/StartAfterStop 36.7
235 TestMultiNode/serial/RestartKeepsNodes 174.95
236 TestMultiNode/serial/DeleteNode 3.4
237 TestMultiNode/serial/StopMultiNode 16.86
238 TestMultiNode/serial/RestartMultiNode 107.24
239 TestMultiNode/serial/ValidateNameConflict 42.06
243 TestPreload 160.99
246 TestSkaffold 115.76
249 TestRunningBinaryUpgrade 105.92
251 TestKubernetesUpgrade 1332.54
264 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.16
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.1
266 TestStoppedBinaryUpgrade/Setup 1.63
267 TestStoppedBinaryUpgrade/Upgrade 123.3
270 TestStoppedBinaryUpgrade/MinikubeLogs 2.07
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.58
280 TestNoKubernetes/serial/StartWithK8s 186.52
289 TestNoKubernetes/serial/StartWithStopK8s 7.65
292 TestNoKubernetes/serial/Start 19.35
296 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
297 TestNoKubernetes/serial/ProfileList 0.69
298 TestNoKubernetes/serial/Stop 2.43
299 TestNoKubernetes/serial/StartNoArgs 19.34
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.15
TestDownloadOnly/v1.20.0/json-events (19.58s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (19.581490444s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.58s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1213 11:02:51.559310    1796 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1213 11:02:51.559561    1796 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
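
The preload.go:131/146 lines above reduce to a file-existence check against the local cache, which is why this subtest completes in 0.00s. A minimal sketch under that assumption follows; the helper names and the file-name template are inferred from the logged tarball path, not taken from minikube's API.

```go
// Minimal sketch, assuming the cache layout visible in the logged path.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the tarball name shown by preload.go:146 above.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

// preloadExists is the whole check: the subtest passes when the tarball is
// already on disk from the earlier json-events download.
func preloadExists(path string) bool {
	info, err := os.Stat(path)
	return err == nil && !info.IsDir()
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.20.0", "docker")
	fmt.Println(p, "exists:", preloadExists(p))
}
```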

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-557000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-557000: exit status 85 (306.875022ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-557000 | jenkins | v1.34.0 | 13 Dec 24 11:02 PST |          |
	|         | -p download-only-557000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 11:02:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:02:32.042722    1799 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:02:32.042945    1799 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:02:32.042950    1799 out.go:358] Setting ErrFile to fd 2...
	I1213 11:02:32.042954    1799 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:02:32.043122    1799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	W1213 11:02:32.043229    1799 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20090-800/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20090-800/.minikube/config/config.json: no such file or directory
	I1213 11:02:32.045481    1799 out.go:352] Setting JSON to true
	I1213 11:02:32.075363    1799 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":122,"bootTime":1734116430,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 11:02:32.075530    1799 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 11:02:32.097559    1799 out.go:97] [download-only-557000] minikube v1.34.0 on Darwin 15.1.1
	I1213 11:02:32.097703    1799 notify.go:220] Checking for updates...
	W1213 11:02:32.097693    1799 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 11:02:32.120444    1799 out.go:169] MINIKUBE_LOCATION=20090
	I1213 11:02:32.141770    1799 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:02:32.184513    1799 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 11:02:32.205407    1799 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:02:32.226626    1799 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	W1213 11:02:32.268582    1799 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 11:02:32.269088    1799 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 11:02:32.327524    1799 out.go:97] Using the hyperkit driver based on user configuration
	I1213 11:02:32.327584    1799 start.go:297] selected driver: hyperkit
	I1213 11:02:32.327600    1799 start.go:901] validating driver "hyperkit" against <nil>
	I1213 11:02:32.327796    1799 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:02:32.328250    1799 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20090-800/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1213 11:02:32.723139    1799 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1213 11:02:32.730547    1799 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:02:32.730588    1799 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1213 11:02:32.730636    1799 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 11:02:32.737994    1799 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1213 11:02:32.738770    1799 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 11:02:32.738807    1799 cni.go:84] Creating CNI manager for ""
	I1213 11:02:32.738857    1799 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1213 11:02:32.738925    1799 start.go:340] cluster config:
	{Name:download-only-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:02:32.739189    1799 iso.go:125] acquiring lock: {Name:mke3ec926417a11c6d5b1356d2702df4068fa1cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:02:32.759249    1799 out.go:97] Downloading VM boot image ...
	I1213 11:02:32.759316    1799 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/20090-800/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1213 11:02:41.456331    1799 out.go:97] Starting "download-only-557000" primary control-plane node in "download-only-557000" cluster
	I1213 11:02:41.456394    1799 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1213 11:02:41.510915    1799 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I1213 11:02:41.510956    1799 cache.go:56] Caching tarball of preloaded images
	I1213 11:02:41.511376    1799 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1213 11:02:41.531482    1799 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1213 11:02:41.531511    1799 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I1213 11:02:41.623808    1799 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-557000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-557000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.31s)
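
The download.go:107 lines in the Last Start log above append "?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3" to the preload URL, a go-getter-style hint that the fetched file is verified against a known digest. Here is a minimal, hedged sketch of that verification step; it is illustrative, not minikube's downloader, and the destination path is an assumption.

```go
// Download a file and fail if its MD5 digest does not match the expected hex
// digest carried in the URL's checksum hint.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func fetchAndVerify(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Tee the download through the hash so the body is read only once.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		os.Remove(dst) // don't leave a corrupt tarball in the cache
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4"
	if err := fetchAndVerify(url, "/tmp/preload.tar.lz4", "9a82241e9b8b4ad2b5cca73108f2c7a3"); err != nil {
		fmt.Println(err)
	}
}
```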

TestDownloadOnly/v1.20.0/DeleteAll (0.29s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.29s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.26s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-557000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.26s)

TestDownloadOnly/v1.31.2/json-events (8.35s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-593000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-593000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=hyperkit : (8.353032264s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (8.35s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1213 11:03:00.766873    1796 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1213 11:03:00.766922    1796 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-593000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-593000: exit status 85 (307.427792ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-557000 | jenkins | v1.34.0 | 13 Dec 24 11:02 PST |                     |
	|         | -p download-only-557000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Dec 24 11:02 PST | 13 Dec 24 11:02 PST |
	| delete  | -p download-only-557000        | download-only-557000 | jenkins | v1.34.0 | 13 Dec 24 11:02 PST | 13 Dec 24 11:02 PST |
	| start   | -o=json --download-only        | download-only-593000 | jenkins | v1.34.0 | 13 Dec 24 11:02 PST |                     |
	|         | -p download-only-593000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 11:02:52
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:02:52.488055    1857 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:02:52.488364    1857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:02:52.488370    1857 out.go:358] Setting ErrFile to fd 2...
	I1213 11:02:52.488374    1857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:02:52.488565    1857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:02:52.490439    1857 out.go:352] Setting JSON to true
	I1213 11:02:52.523323    1857 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":142,"bootTime":1734116430,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 11:02:52.523476    1857 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 11:02:52.544648    1857 out.go:97] [download-only-593000] minikube v1.34.0 on Darwin 15.1.1
	I1213 11:02:52.544773    1857 notify.go:220] Checking for updates...
	I1213 11:02:52.565374    1857 out.go:169] MINIKUBE_LOCATION=20090
	I1213 11:02:52.586558    1857 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:02:52.607603    1857 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 11:02:52.628438    1857 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:02:52.649550    1857 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	W1213 11:02:52.691314    1857 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 11:02:52.691563    1857 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 11:02:52.723586    1857 out.go:97] Using the hyperkit driver based on user configuration
	I1213 11:02:52.723615    1857 start.go:297] selected driver: hyperkit
	I1213 11:02:52.723643    1857 start.go:901] validating driver "hyperkit" against <nil>
	I1213 11:02:52.723766    1857 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:02:52.723930    1857 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20090-800/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1213 11:02:52.736044    1857 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1213 11:02:52.743239    1857 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:02:52.743263    1857 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1213 11:02:52.743297    1857 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 11:02:52.749394    1857 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1213 11:02:52.749566    1857 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 11:02:52.749600    1857 cni.go:84] Creating CNI manager for ""
	I1213 11:02:52.749650    1857 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 11:02:52.749660    1857 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 11:02:52.749731    1857 start.go:340] cluster config:
	{Name:download-only-593000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:02:52.749836    1857 iso.go:125] acquiring lock: {Name:mke3ec926417a11c6d5b1356d2702df4068fa1cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:02:52.770550    1857 out.go:97] Starting "download-only-593000" primary control-plane node in "download-only-593000" cluster
	I1213 11:02:52.770571    1857 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:02:52.822301    1857 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1213 11:02:52.822364    1857 cache.go:56] Caching tarball of preloaded images
	I1213 11:02:52.822607    1857 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1213 11:02:52.843351    1857 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1213 11:02:52.843366    1857 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 ...
	I1213 11:02:52.922256    1857 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4?checksum=md5:979f32540b837894423b337fec69fbf6 -> /Users/jenkins/minikube-integration/20090-800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-593000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-593000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.31s)

TestDownloadOnly/v1.31.2/DeleteAll (0.26s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.26s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-593000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.24s)

TestBinaryMirror (1.2s)

=== RUN   TestBinaryMirror
I1213 11:03:02.062216    1796 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-556000 --alsologtostderr --binary-mirror http://127.0.0.1:49540 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-556000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-556000
--- PASS: TestBinaryMirror (1.20s)
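
TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:49540, so the kubectl download is served from a local HTTP endpoint instead of dl.k8s.io (see the binary.go:74 line above). A minimal sketch of such a mirror follows; the ./mirror directory layout is an assumption, not taken from the test.

```go
// Serve Kubernetes release binaries from local disk, e.g.
// ./mirror/v1.31.2/bin/darwin/amd64/kubectl becomes
// http://127.0.0.1:49540/v1.31.2/bin/darwin/amd64/kubectl,
// matching the release-path shape minikube requests.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:49540", nil))
}
```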

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-723000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-723000: exit status 85 (201.584194ms)

-- stdout --
	* Profile "addons-723000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-723000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-723000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-723000: exit status 85 (222.311225ms)

-- stdout --
	* Profile "addons-723000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-723000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)
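
Both PreSetup steps assert a specific non-zero exit: status 85 when the addressed profile does not exist. A hedged sketch of extracting that exit code from a failed command, the way a test helper might; the binary path and expected code come from the log above, while the helper name is hypothetical.

```go
// Distinguish an expected usage-error exit from an unexpected failure.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode reports the child's exit status when err wraps an *exec.ExitError.
func exitCode(err error) (int, bool) {
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), true
	}
	return 0, false
}

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "addons", "enable", "dashboard", "-p", "no-such-profile")
	err := cmd.Run()
	if code, ok := exitCode(err); ok && code == 85 {
		fmt.Println("got the expected usage-error exit status 85")
	} else if err != nil {
		fmt.Println("unexpected failure:", err)
	}
}
```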

TestAddons/Setup (338.6s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-723000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-amd64 start -p addons-723000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (5m38.604460234s)
--- PASS: TestAddons/Setup (338.60s)

TestAddons/serial/Volcano (40.46s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 12.462931ms
addons_test.go:807: volcano-scheduler stabilized in 12.493286ms
addons_test.go:823: volcano-controller stabilized in 12.517678ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-v4qkg" [2792be92-0014-43cb-bb79-0c95a3a8bed5] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003027317s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-8jx7m" [352b8e76-af2a-4a79-87c0-814a80b79457] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003096056s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-skdnh" [0b4b8071-fd43-45bc-81b6-5c31e064b181] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00335817s
addons_test.go:842: (dbg) Run:  kubectl --context addons-723000 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-723000 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-723000 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [207ea22e-a137-4ae8-a412-d6ec9363358c] Pending
helpers_test.go:344: "test-job-nginx-0" [207ea22e-a137-4ae8-a412-d6ec9363358c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [207ea22e-a137-4ae8-a412-d6ec9363358c] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003145804s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-723000 addons disable volcano --alsologtostderr -v=1: (11.143190108s)
--- PASS: TestAddons/serial/Volcano (40.46s)
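
The Volcano step repeatedly "waits 6m0s for pods matching" a label selector to become healthy. A minimal client-go sketch of that wait loop follows; the kubeconfig path is the one from the run environment above, the selector comes from this step, and the function name and polling interval are assumptions rather than minikube's helpers.

```go
// Poll until every pod matching a label selector reports phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling; transient errors are tolerated
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/20090-800/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForRunning(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		fmt.Println("pods never became Running:", err)
	}
}
```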

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-723000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-723000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.57s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-723000 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-723000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9f483102-38d6-40ef-8c01-7e760356191c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9f483102-38d6-40ef-8c01-7e760356191c] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004107005s
addons_test.go:633: (dbg) Run:  kubectl --context addons-723000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-723000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-723000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-723000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.57s)

TestAddons/parallel/Registry (15.76s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 1.554917ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-xp22d" [df79a932-2eaa-4013-ae26-a1e39185d97f] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.050166644s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cn5tr" [def43942-a965-4030-8241-96de5f7facde] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003837263s
addons_test.go:331: (dbg) Run:  kubectl --context addons-723000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-723000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-723000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.029716425s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 ip
2024/12/13 11:09:56 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.76s)

TestAddons/parallel/Ingress (21.37s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-723000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-723000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-723000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [eaa0410b-406d-47ce-9789-e588bd5eff16] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [eaa0410b-406d-47ce-9789-e588bd5eff16] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004709448s
I1213 11:11:22.263672    1796 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-723000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-723000 addons disable ingress --alsologtostderr -v=1: (7.511259873s)
--- PASS: TestAddons/parallel/Ingress (21.37s)
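(Note: the same ingress round-trip can be reproduced manually. A sketch under the test's assumptions — ingress and ingress-dns addons enabled, manifests from the minikube testdata tree; hostnames and the VM IP are the ones used in this run:)

$ kubectl --context addons-723000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
$ minikube -p addons-723000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
$ nslookup hello-john.test 192.169.0.2     # resolve through ingress-dns on the VM IP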

TestAddons/parallel/InspektorGadget (11.53s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zg8p7" [67df11a5-1ab4-4e61-b2fe-16f7c63b94b0] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002935987s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-723000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.522858857s)
--- PASS: TestAddons/parallel/InspektorGadget (11.53s)

TestAddons/parallel/MetricsServer (5.53s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.147808ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zk6pb" [11634d76-6fec-4bbe-9cbf-57113eee707a] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003277653s
addons_test.go:402: (dbg) Run:  kubectl --context addons-723000 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.53s)
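(Note: once metrics-server reports healthy, per-pod resource usage is queryable directly; the single check the test makes, by hand with the context from this run:)

$ kubectl --context addons-723000 top pods -n kube-system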

TestAddons/parallel/CSI (61.94s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1213 11:10:19.271589    1796 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 11:10:19.276273    1796 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 11:10:19.276286    1796 kapi.go:107] duration metric: took 4.706523ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.715419ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-723000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-723000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ef75725b-6bb1-4b13-973e-101cd1fb1e35] Pending
helpers_test.go:344: "task-pv-pod" [ef75725b-6bb1-4b13-973e-101cd1fb1e35] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ef75725b-6bb1-4b13-973e-101cd1fb1e35] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004390179s
addons_test.go:511: (dbg) Run:  kubectl --context addons-723000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-723000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-723000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-723000 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-723000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-723000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-723000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b7c7dc46-0255-45c9-94cd-86053c1e86a1] Pending
helpers_test.go:344: "task-pv-pod-restore" [b7c7dc46-0255-45c9-94cd-86053c1e86a1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b7c7dc46-0255-45c9-94cd-86053c1e86a1] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.00533225s
addons_test.go:553: (dbg) Run:  kubectl --context addons-723000 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-723000 delete pod task-pv-pod-restore: (1.295569583s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-723000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-723000 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-723000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.528416415s)
--- PASS: TestAddons/parallel/CSI (61.94s)
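(Note: the provision/snapshot/restore cycle above maps to a short manual sequence. A sketch under the same assumptions as the test — csi-hostpath-driver and volumesnapshots addons enabled, manifests from the minikube testdata tree:)

$ kubectl --context addons-723000 create -f testdata/csi-hostpath-driver/pvc.yaml
$ kubectl --context addons-723000 get pvc hpvc -o jsonpath={.status.phase} -n default     # poll until Bound
$ kubectl --context addons-723000 create -f testdata/csi-hostpath-driver/snapshot.yaml
$ kubectl --context addons-723000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
$ kubectl --context addons-723000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml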

TestAddons/parallel/Headlamp (19.45s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-723000 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-r7xkp" [91696162-cb15-4cec-9164-da5d052d7a8b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-r7xkp" [91696162-cb15-4cec-9164-da5d052d7a8b] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.009195687s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-723000 addons disable headlamp --alsologtostderr -v=1: (5.488707974s)
--- PASS: TestAddons/parallel/Headlamp (19.45s)

TestAddons/parallel/CloudSpanner (5.39s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-nkh48" [d1670716-037c-4707-81df-afe9a2a0bf63] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002846406s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.39s)

TestAddons/parallel/LocalPath (52.66s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-723000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-723000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9c632f38-dcf7-4480-9633-44277805039a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9c632f38-dcf7-4480-9633-44277805039a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9c632f38-dcf7-4480-9633-44277805039a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002011067s
addons_test.go:906: (dbg) Run:  kubectl --context addons-723000 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 ssh "cat /opt/local-path-provisioner/pvc-b106b4d6-da79-496b-9e47-33a0c2303cb9_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-723000 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-723000 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-723000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.94488827s)
--- PASS: TestAddons/parallel/LocalPath (52.66s)
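(Note: the local-path check boils down to writing through a PVC and reading the file back from the node. A sketch using the manifests and provisioner path from this run; the pvc directory name is generated per claim, so <pvc-uuid> below is a placeholder:)

$ kubectl --context addons-723000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
$ kubectl --context addons-723000 apply -f testdata/storage-provisioner-rancher/pod.yaml
$ minikube -p addons-723000 ssh "cat /opt/local-path-provisioner/<pvc-uuid>_default_test-pvc/file1"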

TestAddons/parallel/NvidiaDevicePlugin (5.36s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rnq54" [2aae1527-3aab-4f8b-8e84-34b1dd0acd44] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.007572502s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.36s)

TestAddons/parallel/Yakd (11.74s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4cpkx" [c0d8c64b-4b57-4413-9d60-459d7316d003] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003768765s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-723000 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-723000 addons disable yakd --alsologtostderr -v=1: (5.732911059s)
--- PASS: TestAddons/parallel/Yakd (11.74s)

TestAddons/StoppedEnableDisable (6.02s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-723000
addons_test.go:170: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-723000: (5.401597217s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-723000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-723000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-723000
--- PASS: TestAddons/StoppedEnableDisable (6.02s)
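(Note: this test only confirms that addon toggling still works against a stopped profile; the equivalent manual sequence, with the profile name from this run:)

$ minikube stop -p addons-723000
$ minikube addons enable dashboard -p addons-723000
$ minikube addons disable dashboard -p addons-723000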

TestHyperKitDriverInstallOrUpdate (9.01s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1213 12:05:39.863142    1796 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 12:05:39.863330    1796 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
W1213 12:05:40.659636    1796 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1213 12:05:40.659867    1796 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1213 12:05:40.659925    1796 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/001/docker-machine-driver-hyperkit
I1213 12:05:41.152207    1796 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x11624160 0x11624160 0x11624160 0x11624160 0x11624160 0x11624160 0x11624160] Decompressors:map[bz2:0xc000903ca0 gz:0xc000903ca8 tar:0xc000903c50 tar.bz2:0xc000903c60 tar.gz:0xc000903c70 tar.xz:0xc000903c80 tar.zst:0xc000903c90 tbz2:0xc000903c60 tgz:0xc000903c70 txz:0xc000903c80 tzst:0xc000903c90 xz:0xc000903cb0 zip:0xc000903cc0 zst:0xc000903cb8] Getters:map[file:0xc000762c50 http:0xc00074cb90 https:0xc00074cbe0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}
: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1213 12:05:41.152243    1796 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/001/docker-machine-driver-hyperkit
I1213 12:05:44.719260    1796 install.go:79] stdout: 
W1213 12:05:44.719407    1796 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/001/docker-machine-driver-hyperkit 

I1213 12:05:44.719436    1796 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/001/docker-machine-driver-hyperkit]
I1213 12:05:44.742273    1796 install.go:106] running: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/001/docker-machine-driver-hyperkit]
I1213 12:05:44.764077    1796 install.go:99] testing: [sudo -n chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/001/docker-machine-driver-hyperkit]
I1213 12:05:44.784853    1796 install.go:106] running: [sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/001/docker-machine-driver-hyperkit]
I1213 12:05:44.826310    1796 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 12:05:44.826438    1796 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I1213 12:05:45.557203    1796 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1213 12:05:45.557227    1796 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1213 12:05:45.557288    1796 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1213 12:05:45.557324    1796 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/002/docker-machine-driver-hyperkit
I1213 12:05:45.931418    1796 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x11624160 0x11624160 0x11624160 0x11624160 0x11624160 0x11624160 0x11624160] Decompressors:map[bz2:0xc000903ca0 gz:0xc000903ca8 tar:0xc000903c50 tar.bz2:0xc000903c60 tar.gz:0xc000903c70 tar.xz:0xc000903c80 tar.zst:0xc000903c90 tbz2:0xc000903c60 tgz:0xc000903c70 txz:0xc000903c80 tzst:0xc000903c90 xz:0xc000903cb0 zip:0xc000903cc0 zst:0xc000903cb8] Getters:map[file:0xc0008552d0 http:0xc000598eb0 https:0xc000598f00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}
: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1213 12:05:45.931456    1796 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/002/docker-machine-driver-hyperkit
I1213 12:05:48.759639    1796 install.go:79] stdout: 
W1213 12:05:48.759789    1796 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/002/docker-machine-driver-hyperkit 

I1213 12:05:48.759828    1796 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/002/docker-machine-driver-hyperkit]
I1213 12:05:48.781576    1796 install.go:106] running: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/002/docker-machine-driver-hyperkit]
I1213 12:05:48.803564    1796 install.go:99] testing: [sudo -n chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/002/docker-machine-driver-hyperkit]
I1213 12:05:48.824240    1796 install.go:106] running: [sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperKitDriverInstallOrUpdate2547047979/002/docker-machine-driver-hyperkit]
--- PASS: TestHyperKitDriverInstallOrUpdate (9.01s)
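(Note: the two sudo steps logged above are the same ones a manual hyperkit driver install needs — the binary must be root-owned and setuid before minikube will use it. A sketch assuming the driver was downloaded somewhere on PATH; the /usr/local/bin location below is illustrative, not taken from this run:)

$ sudo chown root:wheel /usr/local/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /usr/local/bin/docker-machine-driver-hyperkit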

TestErrorSpam/setup (40.3s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-115000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-115000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 --driver=hyperkit : (40.298961754s)
--- PASS: TestErrorSpam/setup (40.30s)

TestErrorSpam/start (1.84s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 start --dry-run
--- PASS: TestErrorSpam/start (1.84s)

TestErrorSpam/status (0.61s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 status
--- PASS: TestErrorSpam/status (0.61s)

TestErrorSpam/pause (1.48s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 pause
--- PASS: TestErrorSpam/pause (1.48s)

TestErrorSpam/unpause (1.51s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

TestErrorSpam/stop (153.9s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 stop: (3.424522208s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 stop
E1213 11:13:42.086795    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:13:42.094274    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:13:42.105903    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:13:42.127281    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:13:42.170872    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:13:42.254448    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:13:42.416089    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 stop: (1m15.243152494s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 stop
E1213 11:13:42.737687    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:13:43.379151    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:13:44.660714    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:13:47.222444    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:13:52.343958    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:14:02.585701    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:14:23.067078    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-115000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-115000 stop: (1m15.226936423s)
--- PASS: TestErrorSpam/stop (153.90s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/20090-800/.minikube/files/etc/test/nested/copy/1796/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (216.54s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-178000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E1213 11:15:04.028236    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:16:25.948811    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-178000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (3m36.543185653s)
--- PASS: TestFunctional/serial/StartWithProxy (216.54s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.93s)
=== RUN   TestFunctional/serial/SoftStart
I1213 11:18:34.883486    1796 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-178000 --alsologtostderr -v=8
E1213 11:18:42.086298    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:19:09.797617    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-178000 --alsologtostderr -v=8: (39.931086256s)
functional_test.go:663: soft start took 39.931498889s for "functional-178000" cluster.
I1213 11:19:14.831448    1796 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (39.93s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-178000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-178000 cache add registry.k8s.io/pause:3.1: (1.31256007s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-178000 cache add registry.k8s.io/pause:3.3: (1.19770638s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-178000 cache add registry.k8s.io/pause:latest: (1.048267663s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

TestFunctional/serial/CacheCmd/cache/add_local (1.44s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3287675132/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 cache add minikube-local-cache-test:functional-178000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 cache delete minikube-local-cache-test:functional-178000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-178000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-178000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (172.815476ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.20s)
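(Note: the reload check is a remove/verify/reload loop; a minimal sketch against the same profile, invoking the CLI as plain minikube, with the image name taken from this run:)

$ minikube -p functional-178000 ssh sudo docker rmi registry.k8s.io/pause:latest
$ minikube -p functional-178000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
$ minikube -p functional-178000 cache reload
$ minikube -p functional-178000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again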

TestFunctional/serial/CacheCmd/cache/delete (0.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.19s)

TestFunctional/serial/MinikubeKubectlCmd (1.21s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 kubectl -- --context functional-178000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-178000 kubectl -- --context functional-178000 get pods: (1.211297822s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.21s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.87s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-178000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-178000 get pods: (1.867900544s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.87s)

TestFunctional/serial/ExtraConfig (282.26s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-178000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 11:23:42.122992    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-178000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m42.26259255s)
functional_test.go:761: restart took 4m42.262712812s for "functional-178000" cluster.
I1213 11:24:07.087563    1796 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (282.26s)
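(Note: --extra-config takes component.flag=value pairs that minikube passes through to the named Kubernetes component; the restart exercised here is, by hand:)

$ minikube start -p functional-178000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all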

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-178000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.09s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-178000 logs: (2.085830534s)
--- PASS: TestFunctional/serial/LogsCmd (2.09s)

TestFunctional/serial/LogsFileCmd (2.22s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd2722453384/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-178000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd2722453384/001/logs.txt: (2.220496075s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.22s)
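(Note: the two log commands differ only in sink — stdout versus a file; by hand, with an illustrative output path:)

$ minikube -p functional-178000 logs
$ minikube -p functional-178000 logs --file /tmp/logs.txt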

TestFunctional/serial/InvalidService (4.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-178000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-178000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-178000: exit status 115 (292.433371ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.5:31270 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-178000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.03s)
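
Exit status 115 is the expected failure here: minikube's SVC_UNREACHABLE code for a Service with no running backing pod. A sketch of asserting that exit code from Go (binary path and service name from this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"service", "invalid-svc", "-p", "functional-178000").CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 115 {
		fmt.Println("got expected SVC_UNREACHABLE (exit 115)")
		return
	}
	fmt.Printf("unexpected result: err=%v out=%s", err, out)
}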

TestFunctional/parallel/ConfigCmd (0.58s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-178000 config get cpus: exit status 14 (76.466871ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-178000 config get cpus: exit status 14 (69.513688ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)
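
This is a full config lifecycle: get on an unset key exits 14, set makes the value retrievable, unset restores the exit-14 error. A standalone sketch of the same loop (binary path and profile from this run):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary and returns output plus exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok {
		return string(out), ee.ExitCode()
	}
	return string(out), 0
}

func main() {
	base := []string{"-p", "functional-178000", "config"}
	run(append(base, "unset", "cpus")...)
	out, code := run(append(base, "get", "cpus")...)
	fmt.Printf("get after unset: code=%d out=%q\n", code, out) // expect code 14
	run(append(base, "set", "cpus", "2")...)
	out, code = run(append(base, "get", "cpus")...)
	fmt.Printf("get after set:   code=%d out=%q\n", code, out) // expect code 0, "2"
}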

TestFunctional/parallel/DashboardCmd (12.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-178000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-178000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4044: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.36s)
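
The "unable to kill pid" note is benign: the dashboard process had already exited by the time cleanup ran. A sketch of that start/stop pattern, tolerating the race (flags from this run; the sleep is arbitrary):

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "dashboard", "--url",
		"--port", "36195", "-p", "functional-178000", "--alsologtostderr", "-v=1")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(10 * time.Second) // let the dashboard come up
	// Kill may race with the process exiting on its own, as it did above.
	if err := cmd.Process.Kill(); errors.Is(err, os.ErrProcessDone) {
		fmt.Println("process already finished")
	}
	cmd.Wait()
}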

TestFunctional/parallel/DryRun (1.08s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-178000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-178000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (599.662319ms)

-- stdout --
	* [functional-178000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1213 11:25:15.264542    3996 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:25:15.264849    3996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:25:15.264855    3996 out.go:358] Setting ErrFile to fd 2...
	I1213 11:25:15.264859    3996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:25:15.265061    3996 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:25:15.266708    3996 out.go:352] Setting JSON to false
	I1213 11:25:15.295763    3996 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1485,"bootTime":1734116430,"procs":605,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 11:25:15.295899    3996 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 11:25:15.317968    3996 out.go:177] * [functional-178000] minikube v1.34.0 on Darwin 15.1.1
	I1213 11:25:15.359695    3996 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 11:25:15.359730    3996 notify.go:220] Checking for updates...
	I1213 11:25:15.402520    3996 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:25:15.424378    3996 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 11:25:15.445581    3996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:25:15.503286    3996 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 11:25:15.561498    3996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:25:15.582799    3996 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:25:15.583317    3996 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:25:15.583364    3996 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:25:15.595362    3996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50802
	I1213 11:25:15.595730    3996 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:25:15.596156    3996 main.go:141] libmachine: Using API Version  1
	I1213 11:25:15.596170    3996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:25:15.596404    3996 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:25:15.596510    3996 main.go:141] libmachine: (functional-178000) Calling .DriverName
	I1213 11:25:15.596711    3996 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 11:25:15.596973    3996 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:25:15.596995    3996 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:25:15.608462    3996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50804
	I1213 11:25:15.608839    3996 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:25:15.609197    3996 main.go:141] libmachine: Using API Version  1
	I1213 11:25:15.609217    3996 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:25:15.609423    3996 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:25:15.609541    3996 main.go:141] libmachine: (functional-178000) Calling .DriverName
	I1213 11:25:15.641653    3996 out.go:177] * Using the hyperkit driver based on existing profile
	I1213 11:25:15.683330    3996 start.go:297] selected driver: hyperkit
	I1213 11:25:15.683361    3996 start.go:901] validating driver "hyperkit" against &{Name:functional-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:25:15.683548    3996 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:25:15.711716    3996 out.go:201] 
	W1213 11:25:15.732659    3996 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 11:25:15.753403    3996 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-178000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.08s)
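
Exit status 23 is the driver-validation failure: the requested 250MiB is below the 1800MB floor quoted in the message. The check reduces to a comparison like this sketch (the constant is taken from the message above, not from minikube's source):

package main

import "fmt"

// validateMemory mirrors the RSRC_INSUFFICIENT_REQ_MEMORY check reported above.
func validateMemory(requestedMiB int) error {
	const usableMinimumMB = 1800 // floor quoted in the error message
	if requestedMiB < usableMinimumMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMiB, usableMinimumMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}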

TestFunctional/parallel/InternationalLanguage (0.52s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-178000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-178000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (517.848979ms)

-- stdout --
	* [functional-178000] minikube v1.34.0 sur Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1213 11:25:16.337913    4012 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:25:16.338210    4012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:25:16.338216    4012 out.go:358] Setting ErrFile to fd 2...
	I1213 11:25:16.338220    4012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:25:16.338393    4012 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:25:16.339941    4012 out.go:352] Setting JSON to false
	I1213 11:25:16.369252    4012 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1486,"bootTime":1734116430,"procs":606,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.1.1","kernelVersion":"24.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1213 11:25:16.369346    4012 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1213 11:25:16.390515    4012 out.go:177] * [functional-178000] minikube v1.34.0 sur Darwin 15.1.1
	I1213 11:25:16.432258    4012 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 11:25:16.432291    4012 notify.go:220] Checking for updates...
	I1213 11:25:16.474283    4012 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	I1213 11:25:16.495160    4012 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1213 11:25:16.516401    4012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:25:16.537465    4012 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	I1213 11:25:16.558425    4012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:25:16.580261    4012 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:25:16.580952    4012 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:25:16.581031    4012 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:25:16.594329    4012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50812
	I1213 11:25:16.594675    4012 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:25:16.595086    4012 main.go:141] libmachine: Using API Version  1
	I1213 11:25:16.595102    4012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:25:16.595355    4012 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:25:16.595468    4012 main.go:141] libmachine: (functional-178000) Calling .DriverName
	I1213 11:25:16.595666    4012 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 11:25:16.595947    4012 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:25:16.595973    4012 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:25:16.607408    4012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50814
	I1213 11:25:16.607740    4012 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:25:16.608109    4012 main.go:141] libmachine: Using API Version  1
	I1213 11:25:16.608128    4012 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:25:16.608368    4012 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:25:16.608475    4012 main.go:141] libmachine: (functional-178000) Calling .DriverName
	I1213 11:25:16.640393    4012 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I1213 11:25:16.682487    4012 start.go:297] selected driver: hyperkit
	I1213 11:25:16.682517    4012 start.go:901] validating driver "hyperkit" against &{Name:functional-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:25:16.682726    4012 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:25:16.710236    4012 out.go:201] 
	W1213 11:25:16.732436    4012 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 11:25:16.754226    4012 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.52s)
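
The French output comes from running the same command under a French locale; minikube localizes its messages from the process environment. A sketch, where the LC_ALL value is an assumption about how the test selects the locale:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-178000",
		"--dry-run", "--memory", "250MB", "--driver=hyperkit")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumed locale knob
	out, _ := cmd.CombinedOutput() // expect exit status 23 with French text
	fmt.Println(string(out))
}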

TestFunctional/parallel/StatusCmd (0.58s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.58s)
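
The -f argument is a Go text/template rendered against the status structure. A self-contained sketch with a stand-in struct (field names inferred from the template; note that "kublet" is literal text in the command above, not a field reference, so it renders as-is):

package main

import (
	"os"
	"text/template"
)

// Status stands in for the struct minikube renders the -f template against.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	t := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	t.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}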

TestFunctional/parallel/ServiceCmdConnect (7.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-178000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-178000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9d5jv" [af0b37b5-4253-4e53-8602-0403f798ae84] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9d5jv" [af0b37b5-4253-4e53-8602-0403f798ae84] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004736826s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.5:30644
functional_test.go:1675: http://192.169.0.5:30644: success! body:

Hostname: hello-node-connect-67bdd5bbb4-9d5jv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.5:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.5:30644
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.61s)
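
The final assertion is a plain HTTP GET against the NodePort URL that "service --url" printed; the echoserver body above is the response. A sketch using the URL from this run:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// URL is the NodePort endpoint printed by "service ... --url" above.
	resp, err := http.Get("http://192.169.0.5:30644")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // echoserver reflects the request back
}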

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (28.4s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4d856fbb-bffd-4542-8d04-1cfd3fa81b6e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003581972s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-178000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-178000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-178000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-178000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [efbf65b7-960a-4e54-a17a-009d24adaf24] Pending
helpers_test.go:344: "sp-pod" [efbf65b7-960a-4e54-a17a-009d24adaf24] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [efbf65b7-960a-4e54-a17a-009d24adaf24] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004541522s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-178000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-178000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-178000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dd2f1a27-76df-45dd-921e-5993bb5f686d] Pending
helpers_test.go:344: "sp-pod" [dd2f1a27-76df-45dd-921e-5993bb5f686d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dd2f1a27-76df-45dd-921e-5993bb5f686d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003760866s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-178000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.40s)
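
The delete/re-apply in the middle is the point of the test: /tmp/mount is backed by the PVC, so the foo file written by the first sp-pod must still exist in its replacement. The same sequence by hand (context and manifest paths from this run; in practice you would wait for the new pod to be Running before the final ls):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs against the test cluster's context and panics on failure.
func kubectl(args ...string) string {
	full := append([]string{"--context", "functional-178000"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait here for the new sp-pod to reach Running)
	fmt.Println(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}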

TestFunctional/parallel/SSHCmd (0.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

TestFunctional/parallel/CpCmd (1.25s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh -n functional-178000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 cp functional-178000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd2016680141/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh -n functional-178000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh -n functional-178000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)
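
minikube cp copies host-to-VM here, and the paired ssh "sudo cat" verifies the bytes landed. A round-trip sketch using the paths from this run:

package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) []byte {
	full := append([]string{"-p", "functional-178000"}, args...)
	out, err := exec.Command("out/minikube-darwin-amd64", full...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return out
}

func main() {
	mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	got := mk("ssh", "-n", "functional-178000", "sudo cat /home/docker/cp-test.txt")
	fmt.Printf("VM copy contents: %s", got)
}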

TestFunctional/parallel/MySQL (25s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-178000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-mx676" [18e2b3ce-36b2-46ef-aad2-470cadab929e] Pending
helpers_test.go:344: "mysql-6cdb49bbb-mx676" [18e2b3ce-36b2-46ef-aad2-470cadab929e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-mx676" [18e2b3ce-36b2-46ef-aad2-470cadab929e] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.004946706s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-178000 exec mysql-6cdb49bbb-mx676 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-178000 exec mysql-6cdb49bbb-mx676 -- mysql -ppassword -e "show databases;": exit status 1 (141.435599ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1213 11:24:41.630855    1796 retry.go:31] will retry after 1.311847271s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-178000 exec mysql-6cdb49bbb-mx676 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-178000 exec mysql-6cdb49bbb-mx676 -- mysql -ppassword -e "show databases;": exit status 1 (111.210333ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1213 11:24:43.055575    1796 retry.go:31] will retry after 1.131535593s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-178000 exec mysql-6cdb49bbb-mx676 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.00s)
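
The two failed attempts are just mysqld still initializing (auth not ready, then the socket missing); the harness backs off and retries until the query succeeds. The pattern in isolation (attempt count and delay arbitrary):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry mirrors the helper's behavior in the log: run fn, and on failure wait
// and try again until attempts are exhausted.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(3, time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("mysqld not ready") // e.g. ERROR 1045 / 2002 above
		}
		fmt.Println("query succeeded on attempt", calls)
		return nil
	})
}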

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1796/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "sudo cat /etc/test/nested/copy/1796/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.33s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1796.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "sudo cat /etc/ssl/certs/1796.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1796.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "sudo cat /usr/share/ca-certificates/1796.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/17962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "sudo cat /etc/ssl/certs/17962.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/17962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "sudo cat /usr/share/ca-certificates/17962.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.33s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-178000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
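
The --template flag is a Go text/template evaluated against the node object; this one ranges over .metadata.labels and prints each key. The same template run locally over a stand-in label map:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for .metadata.labels on the node object.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-178000",
		"kubernetes.io/os":       "linux",
	}
	t := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}"))
	t.Execute(os.Stdout, labels)
}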

TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-178000 ssh "sudo systemctl is-active crio": exit status 1 (174.708499ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)
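
systemctl is-active exits non-zero (3 here, surfaced through ssh as the status above) when a unit is inactive, which is exactly what the test wants for the runtime that is not in use. A sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-178000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	if err != nil {
		// Non-zero exit: the unit reported a state other than "active".
		fmt.Printf("crio not active (%v): %s", err, out)
		return
	}
	fmt.Printf("unexpected: crio is active: %s", out)
}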

TestFunctional/parallel/License (0.71s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.71s)

TestFunctional/parallel/Version/short (0.13s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-178000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-178000
docker.io/kicbase/echo-server:functional-178000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-178000 image ls --format short --alsologtostderr:
I1213 11:25:18.293903    4045 out.go:345] Setting OutFile to fd 1 ...
I1213 11:25:18.294208    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 11:25:18.294214    4045 out.go:358] Setting ErrFile to fd 2...
I1213 11:25:18.294218    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 11:25:18.294446    4045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
I1213 11:25:18.295164    4045 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1213 11:25:18.295286    4045 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1213 11:25:18.295682    4045 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1213 11:25:18.295714    4045 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1213 11:25:18.307450    4045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50863
I1213 11:25:18.307911    4045 main.go:141] libmachine: () Calling .GetVersion
I1213 11:25:18.308373    4045 main.go:141] libmachine: Using API Version  1
I1213 11:25:18.308383    4045 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 11:25:18.308656    4045 main.go:141] libmachine: () Calling .GetMachineName
I1213 11:25:18.308785    4045 main.go:141] libmachine: (functional-178000) Calling .GetState
I1213 11:25:18.308898    4045 main.go:141] libmachine: (functional-178000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1213 11:25:18.308973    4045 main.go:141] libmachine: (functional-178000) DBG | hyperkit pid from json: 2822
I1213 11:25:18.310560    4045 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1213 11:25:18.310588    4045 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1213 11:25:18.323330    4045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50866
I1213 11:25:18.323725    4045 main.go:141] libmachine: () Calling .GetVersion
I1213 11:25:18.324083    4045 main.go:141] libmachine: Using API Version  1
I1213 11:25:18.324097    4045 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 11:25:18.324376    4045 main.go:141] libmachine: () Calling .GetMachineName
I1213 11:25:18.324501    4045 main.go:141] libmachine: (functional-178000) Calling .DriverName
I1213 11:25:18.324692    4045 ssh_runner.go:195] Run: systemctl --version
I1213 11:25:18.324712    4045 main.go:141] libmachine: (functional-178000) Calling .GetSSHHostname
I1213 11:25:18.324795    4045 main.go:141] libmachine: (functional-178000) Calling .GetSSHPort
I1213 11:25:18.324898    4045 main.go:141] libmachine: (functional-178000) Calling .GetSSHKeyPath
I1213 11:25:18.325002    4045 main.go:141] libmachine: (functional-178000) Calling .GetSSHUsername
I1213 11:25:18.325110    4045 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/functional-178000/id_rsa Username:docker}
I1213 11:25:18.362288    4045 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1213 11:25:18.384713    4045 main.go:141] libmachine: Making call to close driver server
I1213 11:25:18.384722    4045 main.go:141] libmachine: (functional-178000) Calling .Close
I1213 11:25:18.384877    4045 main.go:141] libmachine: Successfully made call to close driver server
I1213 11:25:18.384889    4045 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 11:25:18.384897    4045 main.go:141] libmachine: Making call to close driver server
I1213 11:25:18.384903    4045 main.go:141] libmachine: (functional-178000) Calling .Close
I1213 11:25:18.384908    4045 main.go:141] libmachine: (functional-178000) DBG | Closing plugin on server side
I1213 11:25:18.385039    4045 main.go:141] libmachine: Successfully made call to close driver server
I1213 11:25:18.385051    4045 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 11:25:18.385053    4045 main.go:141] libmachine: (functional-178000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-178000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 66f8bdd3810c9 | 192MB  |
| registry.k8s.io/kube-proxy                  | v1.31.2           | 505d571f5fd56 | 91.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-178000 | 27e438ecce555 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.2           | 0486b6c53a1b5 | 88.4MB |
| docker.io/kicbase/echo-server               | functional-178000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | 91ca84b4f5779 | 52.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.2           | 847c7bc1a5418 | 67.4MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| localhost/my-image                          | functional-178000 | 0c8db6ab4a29e | 1.24MB |
| registry.k8s.io/kube-apiserver              | v1.31.2           | 9499c9960544e | 94.2MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-178000 image ls --format table --alsologtostderr:
I1213 11:25:21.132799    4073 out.go:345] Setting OutFile to fd 1 ...
I1213 11:25:21.133084    4073 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 11:25:21.133092    4073 out.go:358] Setting ErrFile to fd 2...
I1213 11:25:21.133096    4073 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 11:25:21.133325    4073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
I1213 11:25:21.134132    4073 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1213 11:25:21.134270    4073 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1213 11:25:21.134731    4073 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1213 11:25:21.134780    4073 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1213 11:25:21.148880    4073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50904
I1213 11:25:21.149416    4073 main.go:141] libmachine: () Calling .GetVersion
I1213 11:25:21.149947    4073 main.go:141] libmachine: Using API Version  1
I1213 11:25:21.149959    4073 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 11:25:21.150268    4073 main.go:141] libmachine: () Calling .GetMachineName
I1213 11:25:21.150484    4073 main.go:141] libmachine: (functional-178000) Calling .GetState
I1213 11:25:21.150635    4073 main.go:141] libmachine: (functional-178000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1213 11:25:21.150777    4073 main.go:141] libmachine: (functional-178000) DBG | hyperkit pid from json: 2822
I1213 11:25:21.152669    4073 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1213 11:25:21.152707    4073 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1213 11:25:21.166002    4073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50906
I1213 11:25:21.166381    4073 main.go:141] libmachine: () Calling .GetVersion
I1213 11:25:21.166779    4073 main.go:141] libmachine: Using API Version  1
I1213 11:25:21.166798    4073 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 11:25:21.167026    4073 main.go:141] libmachine: () Calling .GetMachineName
I1213 11:25:21.167144    4073 main.go:141] libmachine: (functional-178000) Calling .DriverName
I1213 11:25:21.167350    4073 ssh_runner.go:195] Run: systemctl --version
I1213 11:25:21.167369    4073 main.go:141] libmachine: (functional-178000) Calling .GetSSHHostname
I1213 11:25:21.167462    4073 main.go:141] libmachine: (functional-178000) Calling .GetSSHPort
I1213 11:25:21.167561    4073 main.go:141] libmachine: (functional-178000) Calling .GetSSHKeyPath
I1213 11:25:21.167668    4073 main.go:141] libmachine: (functional-178000) Calling .GetSSHUsername
I1213 11:25:21.167780    4073 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/functional-178000/id_rsa Username:docker}
I1213 11:25:21.202327    4073 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1213 11:25:21.221418    4073 main.go:141] libmachine: Making call to close driver server
I1213 11:25:21.221428    4073 main.go:141] libmachine: (functional-178000) Calling .Close
I1213 11:25:21.221585    4073 main.go:141] libmachine: Successfully made call to close driver server
I1213 11:25:21.221592    4073 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 11:25:21.221597    4073 main.go:141] libmachine: Making call to close driver server
I1213 11:25:21.221601    4073 main.go:141] libmachine: (functional-178000) Calling .Close
I1213 11:25:21.221736    4073 main.go:141] libmachine: Successfully made call to close driver server
I1213 11:25:21.221746    4073 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 11:25:21.221755    4073 main.go:141] libmachine: (functional-178000) DBG | Closing plugin on server side
2024/12/13 11:25:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-178000 image ls --format json --alsologtostderr:
[{"id":"0c8db6ab4a29eaae4a87f69051270369a1507d4f119e86d7763881580d501a5f","repoDigests":[],"repoTags":["localhost/my-image:functional-178000"],"size":"1240000"},{"id":"27e438ecce555319be6c5cbee88e00b29dea25344f5a24629cf4814118bb75aa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-178000"],"size":"30"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52500000"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa63
69f8545ab0854f9d62b44503","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"88400000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"94200000"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"67400000"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"91500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":[],"repoTags":["docker.io/libra
ry/nginx:latest"],"size":"192000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-178000"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-178000 image ls --format json --alsologtostderr:
I1213 11:25:20.944080    4069 out.go:345] Setting OutFile to fd 1 ...
I1213 11:25:20.944448    4069 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 11:25:20.944455    4069 out.go:358] Setting ErrFile to fd 2...
I1213 11:25:20.944459    4069 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 11:25:20.944633    4069 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
I1213 11:25:20.945334    4069 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1213 11:25:20.945438    4069 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1213 11:25:20.945819    4069 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1213 11:25:20.945891    4069 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1213 11:25:20.958351    4069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50899
I1213 11:25:20.958743    4069 main.go:141] libmachine: () Calling .GetVersion
I1213 11:25:20.959257    4069 main.go:141] libmachine: Using API Version  1
I1213 11:25:20.959290    4069 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 11:25:20.959521    4069 main.go:141] libmachine: () Calling .GetMachineName
I1213 11:25:20.959666    4069 main.go:141] libmachine: (functional-178000) Calling .GetState
I1213 11:25:20.959786    4069 main.go:141] libmachine: (functional-178000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1213 11:25:20.959848    4069 main.go:141] libmachine: (functional-178000) DBG | hyperkit pid from json: 2822
I1213 11:25:20.961553    4069 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1213 11:25:20.961583    4069 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1213 11:25:20.974606    4069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50901
I1213 11:25:20.974975    4069 main.go:141] libmachine: () Calling .GetVersion
I1213 11:25:20.975385    4069 main.go:141] libmachine: Using API Version  1
I1213 11:25:20.975404    4069 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 11:25:20.975693    4069 main.go:141] libmachine: () Calling .GetMachineName
I1213 11:25:20.975820    4069 main.go:141] libmachine: (functional-178000) Calling .DriverName
I1213 11:25:20.976027    4069 ssh_runner.go:195] Run: systemctl --version
I1213 11:25:20.976049    4069 main.go:141] libmachine: (functional-178000) Calling .GetSSHHostname
I1213 11:25:20.976163    4069 main.go:141] libmachine: (functional-178000) Calling .GetSSHPort
I1213 11:25:20.976271    4069 main.go:141] libmachine: (functional-178000) Calling .GetSSHKeyPath
I1213 11:25:20.976369    4069 main.go:141] libmachine: (functional-178000) Calling .GetSSHUsername
I1213 11:25:20.976464    4069 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/functional-178000/id_rsa Username:docker}
I1213 11:25:21.011550    4069 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1213 11:25:21.034536    4069 main.go:141] libmachine: Making call to close driver server
I1213 11:25:21.034546    4069 main.go:141] libmachine: (functional-178000) Calling .Close
I1213 11:25:21.034686    4069 main.go:141] libmachine: (functional-178000) DBG | Closing plugin on server side
I1213 11:25:21.034713    4069 main.go:141] libmachine: Successfully made call to close driver server
I1213 11:25:21.034729    4069 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 11:25:21.034739    4069 main.go:141] libmachine: Making call to close driver server
I1213 11:25:21.034744    4069 main.go:141] libmachine: (functional-178000) Calling .Close
I1213 11:25:21.034866    4069 main.go:141] libmachine: Successfully made call to close driver server
I1213 11:25:21.034874    4069 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 11:25:21.034878    4069 main.go:141] libmachine: (functional-178000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)
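For reference, the "image ls --format json" stdout above is one JSON array of image records. A minimal sketch of decoding it with only the Go standard library; the struct name and field set are inferred from the stdout shown, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the keys visible in the JSON stdout above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // note: sizes are emitted as strings
}

func main() {
	// Same binary path and profile name as the test invocation above.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-178000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID[:12], img.RepoTags) // IDs here are 64-char digests
	}
}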
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-178000 image ls --format yaml --alsologtostderr:
- id: 91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52500000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-178000
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "91500000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "67400000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 27e438ecce555319be6c5cbee88e00b29dea25344f5a24629cf4814118bb75aa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-178000
size: "30"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "94200000"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "88400000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-178000 image ls --format yaml --alsologtostderr:
I1213 11:25:18.481211    4050 out.go:345] Setting OutFile to fd 1 ...
I1213 11:25:18.481444    4050 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 11:25:18.481450    4050 out.go:358] Setting ErrFile to fd 2...
I1213 11:25:18.481453    4050 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 11:25:18.481636    4050 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
I1213 11:25:18.482300    4050 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1213 11:25:18.482396    4050 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1213 11:25:18.482753    4050 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1213 11:25:18.482792    4050 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1213 11:25:18.494494    4050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50871
I1213 11:25:18.494890    4050 main.go:141] libmachine: () Calling .GetVersion
I1213 11:25:18.495314    4050 main.go:141] libmachine: Using API Version  1
I1213 11:25:18.495330    4050 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 11:25:18.495598    4050 main.go:141] libmachine: () Calling .GetMachineName
I1213 11:25:18.495722    4050 main.go:141] libmachine: (functional-178000) Calling .GetState
I1213 11:25:18.495828    4050 main.go:141] libmachine: (functional-178000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1213 11:25:18.495913    4050 main.go:141] libmachine: (functional-178000) DBG | hyperkit pid from json: 2822
I1213 11:25:18.497483    4050 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1213 11:25:18.497507    4050 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1213 11:25:18.510035    4050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50873
I1213 11:25:18.510378    4050 main.go:141] libmachine: () Calling .GetVersion
I1213 11:25:18.510745    4050 main.go:141] libmachine: Using API Version  1
I1213 11:25:18.510760    4050 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 11:25:18.510991    4050 main.go:141] libmachine: () Calling .GetMachineName
I1213 11:25:18.511133    4050 main.go:141] libmachine: (functional-178000) Calling .DriverName
I1213 11:25:18.511319    4050 ssh_runner.go:195] Run: systemctl --version
I1213 11:25:18.511336    4050 main.go:141] libmachine: (functional-178000) Calling .GetSSHHostname
I1213 11:25:18.511455    4050 main.go:141] libmachine: (functional-178000) Calling .GetSSHPort
I1213 11:25:18.511546    4050 main.go:141] libmachine: (functional-178000) Calling .GetSSHKeyPath
I1213 11:25:18.511641    4050 main.go:141] libmachine: (functional-178000) Calling .GetSSHUsername
I1213 11:25:18.511741    4050 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/functional-178000/id_rsa Username:docker}
I1213 11:25:18.549946    4050 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1213 11:25:18.580793    4050 main.go:141] libmachine: Making call to close driver server
I1213 11:25:18.580802    4050 main.go:141] libmachine: (functional-178000) Calling .Close
I1213 11:25:18.580948    4050 main.go:141] libmachine: Successfully made call to close driver server
I1213 11:25:18.580957    4050 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 11:25:18.580962    4050 main.go:141] libmachine: Making call to close driver server
I1213 11:25:18.580966    4050 main.go:141] libmachine: (functional-178000) Calling .Close
I1213 11:25:18.580969    4050 main.go:141] libmachine: (functional-178000) DBG | Closing plugin on server side
I1213 11:25:18.581128    4050 main.go:141] libmachine: Successfully made call to close driver server
I1213 11:25:18.581140    4050 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 11:25:18.581226    4050 main.go:141] libmachine: (functional-178000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
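The YAML listing is the same data in a different encoding. A sketch of decoding it, assuming the third-party gopkg.in/yaml.v3 package (the test itself only shells out and inspects the text):

package main

import (
	"fmt"
	"os/exec"

	"gopkg.in/yaml.v3"
)

// Field tags match the keys in the YAML stdout above.
type listedImage struct {
	ID       string   `yaml:"id"`
	RepoTags []string `yaml:"repoTags"`
	Size     string   `yaml:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-178000",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := yaml.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	fmt.Printf("decoded %d images\n", len(images))
}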
TestFunctional/parallel/ImageCommands/ImageBuild (2.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-178000 ssh pgrep buildkitd: exit status 1 (151.986795ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image build -t localhost/my-image:functional-178000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-178000 image build -t localhost/my-image:functional-178000 testdata/build --alsologtostderr: (1.9007993s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-178000 image build -t localhost/my-image:functional-178000 testdata/build --alsologtostderr:
I1213 11:25:18.841418    4061 out.go:345] Setting OutFile to fd 1 ...
I1213 11:25:18.842207    4061 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 11:25:18.842214    4061 out.go:358] Setting ErrFile to fd 2...
I1213 11:25:18.842218    4061 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 11:25:18.842395    4061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
I1213 11:25:18.843061    4061 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1213 11:25:18.843951    4061 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1213 11:25:18.844313    4061 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1213 11:25:18.844357    4061 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1213 11:25:18.855545    4061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50885
I1213 11:25:18.855940    4061 main.go:141] libmachine: () Calling .GetVersion
I1213 11:25:18.856381    4061 main.go:141] libmachine: Using API Version  1
I1213 11:25:18.856389    4061 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 11:25:18.856669    4061 main.go:141] libmachine: () Calling .GetMachineName
I1213 11:25:18.856797    4061 main.go:141] libmachine: (functional-178000) Calling .GetState
I1213 11:25:18.856904    4061 main.go:141] libmachine: (functional-178000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1213 11:25:18.856969    4061 main.go:141] libmachine: (functional-178000) DBG | hyperkit pid from json: 2822
I1213 11:25:18.858485    4061 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1213 11:25:18.858507    4061 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1213 11:25:18.869831    4061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50887
I1213 11:25:18.870312    4061 main.go:141] libmachine: () Calling .GetVersion
I1213 11:25:18.870685    4061 main.go:141] libmachine: Using API Version  1
I1213 11:25:18.870698    4061 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 11:25:18.870942    4061 main.go:141] libmachine: () Calling .GetMachineName
I1213 11:25:18.871073    4061 main.go:141] libmachine: (functional-178000) Calling .DriverName
I1213 11:25:18.871278    4061 ssh_runner.go:195] Run: systemctl --version
I1213 11:25:18.871296    4061 main.go:141] libmachine: (functional-178000) Calling .GetSSHHostname
I1213 11:25:18.871406    4061 main.go:141] libmachine: (functional-178000) Calling .GetSSHPort
I1213 11:25:18.871497    4061 main.go:141] libmachine: (functional-178000) Calling .GetSSHKeyPath
I1213 11:25:18.871597    4061 main.go:141] libmachine: (functional-178000) Calling .GetSSHUsername
I1213 11:25:18.871700    4061 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/functional-178000/id_rsa Username:docker}
I1213 11:25:18.905333    4061 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3571420193.tar
I1213 11:25:18.905424    4061 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 11:25:18.912865    4061 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3571420193.tar
I1213 11:25:18.916328    4061 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3571420193.tar: stat -c "%s %y" /var/lib/minikube/build/build.3571420193.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3571420193.tar': No such file or directory
I1213 11:25:18.916357    4061 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3571420193.tar --> /var/lib/minikube/build/build.3571420193.tar (3072 bytes)
I1213 11:25:18.937706    4061 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3571420193
I1213 11:25:18.947796    4061 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3571420193 -xf /var/lib/minikube/build/build.3571420193.tar
I1213 11:25:18.955862    4061 docker.go:360] Building image: /var/lib/minikube/build/build.3571420193
I1213 11:25:18.955937    4061 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-178000 /var/lib/minikube/build/build.3571420193
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:0c8db6ab4a29eaae4a87f69051270369a1507d4f119e86d7763881580d501a5f done
#8 naming to localhost/my-image:functional-178000 done
#8 DONE 0.0s
I1213 11:25:20.628345    4061 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-178000 /var/lib/minikube/build/build.3571420193: (1.672406118s)
I1213 11:25:20.628431    4061 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3571420193
I1213 11:25:20.638456    4061 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3571420193.tar
I1213 11:25:20.646940    4061 build_images.go:217] Built localhost/my-image:functional-178000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3571420193.tar
I1213 11:25:20.646968    4061 build_images.go:133] succeeded building to: functional-178000
I1213 11:25:20.646973    4061 build_images.go:134] failed building to: 
I1213 11:25:20.646988    4061 main.go:141] libmachine: Making call to close driver server
I1213 11:25:20.646995    4061 main.go:141] libmachine: (functional-178000) Calling .Close
I1213 11:25:20.647163    4061 main.go:141] libmachine: Successfully made call to close driver server
I1213 11:25:20.647174    4061 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 11:25:20.647180    4061 main.go:141] libmachine: Making call to close driver server
I1213 11:25:20.647185    4061 main.go:141] libmachine: (functional-178000) Calling .Close
I1213 11:25:20.647202    4061 main.go:141] libmachine: (functional-178000) DBG | Closing plugin on server side
I1213 11:25:20.647340    4061 main.go:141] libmachine: Successfully made call to close driver server
I1213 11:25:20.647350    4061 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 11:25:20.647366    4061 main.go:141] libmachine: (functional-178000) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.25s)
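The buildkit steps logged above imply a three-line Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A sketch that reconstructs an equivalent build context and replays the build; the Dockerfile text and the content.txt payload are inferred from the log, not copied from testdata/build:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	// Steps #5-#7 above: base image, RUN true, ADD content.txt /
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test\n"), 0o644); err != nil {
		panic(err)
	}
	// Same build command the test runs, pointed at the reconstructed context.
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-178000",
		"image", "build", "-t", "localhost/my-image:functional-178000", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}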
TestFunctional/parallel/ImageCommands/Setup (1.83s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.804341132s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-178000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)
TestFunctional/parallel/DockerEnv/bash (0.73s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-178000 docker-env) && out/minikube-darwin-amd64 status -p functional-178000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-178000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.73s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image load --daemon kicbase/echo-server:functional-178000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image load --daemon kicbase/echo-server:functional-178000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.70s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-178000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image load --daemon kicbase/echo-server:functional-178000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image save kicbase/echo-server:functional-178000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image rm kicbase/echo-server:functional-178000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-178000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 image save --daemon kicbase/echo-server:functional-178000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-178000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
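Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full export/import round trip. A sketch of the same sequence driven through os/exec, with the tar path and tag copied from the logged commands:

package main

import (
	"fmt"
	"os/exec"
)

// minikube runs the test's binary and fails loudly on a non-zero exit.
func minikube(args ...string) {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
}

func main() {
	profile := "functional-178000"
	tag := "kicbase/echo-server:" + profile
	tar := "/Users/jenkins/workspace/echo-server-save.tar"
	minikube("-p", profile, "image", "save", tag, tar) // ImageSaveToFile
	minikube("-p", profile, "image", "rm", tag)        // ImageRemove
	minikube("-p", profile, "image", "load", tar)      // ImageLoadFromFile
	minikube("-p", profile, "image", "ls")             // verify, as functional_test.go:451 does
}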
TestFunctional/parallel/ServiceCmd/DeployApp (23.14s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-178000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-178000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-vjhjv" [2c70d352-6274-45d6-bf03-9ed897a3bca5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-vjhjv" [2c70d352-6274-45d6-bf03-9ed897a3bca5] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.005120763s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.14s)
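The DeployApp flow is plain kubectl against the functional-178000 context: create a deployment, expose it as a NodePort, then poll until the pod is Ready. A sketch with "kubectl wait" standing in for the test's own polling (an assumption; helpers_test.go watches the pod list directly):

package main

import (
	"os"
	"os/exec"
)

func kubectl(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-178000"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// The test allows up to 10m for app=hello-node to become healthy.
	kubectl("wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=10m")
}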
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-178000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-178000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-178000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-178000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3728: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-178000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-178000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [dd95a2e1-bf54-4497-9bf1-b7f1fe1767ec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [dd95a2e1-bf54-4497-9bf1-b7f1fe1767ec] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003890439s
I1213 11:24:56.243287    1796 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)
TestFunctional/parallel/ServiceCmd/List (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.40s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 service list -o json
functional_test.go:1494: Took "396.148237ms" to run "out/minikube-darwin-amd64 -p functional-178000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.40s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.5:32173
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)
TestFunctional/parallel/ServiceCmd/Format (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)
TestFunctional/parallel/ServiceCmd/URL (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.5:32173
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-178000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.101.156 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.03s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1213 11:24:56.349558    1796 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.05s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.04s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1213 11:24:56.437150    1796 config.go:182] Loaded profile config "functional-178000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.03s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.15s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-178000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.15s)
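The tunnel subtests above verify three things while "minikube tunnel" runs: the service gets a LoadBalancer ingress IP, that IP answers HTTP, and the cluster DNS (the resolver at 10.96.0.10 used by the dig test) resolves the service name. A sketch of the first and last checks, with the jsonpath query and dig flags copied from the logged commands:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// IngressIP: read .status.loadBalancer.ingress[0].ip from the service.
	ip, err := exec.Command("kubectl", "--context", "functional-178000",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("ingress IP: %s\n", ip)

	// DNSResolutionByDig: ask the in-cluster resolver for the service A record.
	out, err := exec.Command("dig", "+time=5", "+tries=3", "@10.96.0.10",
		"nginx-svc.default.svc.cluster.local.", "A").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}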
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "230.702126ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "93.840498ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "234.498798ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "92.63177ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
TestFunctional/parallel/MountCmd/any-port (6.07s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1278490317/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1734117906019860000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1278490317/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1734117906019860000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1278490317/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1734117906019860000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1278490317/001/test-1734117906019860000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-178000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (173.288883ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 11:25:06.194214    1796 retry.go:31] will retry after 340.228413ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 19:25 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 19:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 19:25 test-1734117906019860000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh cat /mount-9p/test-1734117906019860000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-178000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [70d8c5fc-c5e8-48ef-8841-5e181433ca61] Pending
helpers_test.go:344: "busybox-mount" [70d8c5fc-c5e8-48ef-8841-5e181433ca61] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [70d8c5fc-c5e8-48ef-8841-5e181433ca61] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [70d8c5fc-c5e8-48ef-8841-5e181433ca61] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00454629s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-178000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1278490317/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.07s)
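The mount checks tolerate a brief window where the 9p mount is not yet visible: the first findmnt fails with exit status 1 and retry.go reruns it after a short backoff. A sketch of the same check-with-retry shape; the attempt count and sleep interval are illustrative, not the test's values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 5; attempt++ {
		// Same probe the test runs inside the guest over minikube ssh.
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-178000",
			"ssh", "findmnt -T /mount-9p | grep 9p").Output()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying\n", attempt, err)
		time.Sleep(500 * time.Millisecond)
	}
	panic("/mount-9p never appeared")
}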
TestFunctional/parallel/MountCmd/specific-port (1.67s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1827782746/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-178000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (177.395241ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 11:25:12.272451    1796 retry.go:31] will retry after 528.319831ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1827782746/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-178000 ssh "sudo umount -f /mount-9p": exit status 1 (150.281691ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-178000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1827782746/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.67s)
TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3885664538/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3885664538/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3885664538/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-178000 ssh "findmnt -T" /mount1: exit status 1 (178.80915ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 11:25:13.945255    1796 retry.go:31] will retry after 314.954259ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-178000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-178000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3885664538/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3885664538/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-178000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3885664538/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)
TestFunctional/delete_echo-server_images (0.05s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-178000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-178000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-178000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (204.97s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-224000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E1213 11:28:42.121082    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-224000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m24.553583631s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (204.97s)

TestMultiControlPlane/serial/DeployApp (5.43s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-224000 -- rollout status deployment/busybox: (2.944038436s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-7vlsm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-l97s5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-wbknx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-7vlsm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-l97s5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-wbknx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-7vlsm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-l97s5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-wbknx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.43s)
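The nine exec calls above form a full cross-product: every busybox replica resolves an external name, the in-cluster short name, and the FQDN, which proves CoreDNS works from all pods. A sketch of that fan-out in Go, using plain kubectl and the pod names from the log; the loop is an illustration, not the test's actual code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-7vlsm", "busybox-7dff88458-l97s5", "busybox-7dff88458-wbknx"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-224000",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s -> %s failed: %v\n%s", pod, name, err, out)
			}
		}
	}
}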

TestMultiControlPlane/serial/PingHostFromPods (1.4s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-7vlsm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-7vlsm -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-l97s5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-l97s5 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-wbknx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-224000 -- exec busybox-7dff88458-wbknx -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.40s)

TestMultiControlPlane/serial/AddWorkerNode (167.58s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-224000 -v=7 --alsologtostderr
E1213 11:29:19.482655    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:19.489504    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:19.502026    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:19.523812    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:19.565392    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:19.647744    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:19.809169    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:20.130814    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:20.772775    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:22.054313    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:24.615946    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:29.738916    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:29:39.980583    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:30:00.462940    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:30:05.187076    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:30:41.424061    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-224000 -v=7 --alsologtostderr: (2m47.081330066s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (167.58s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-224000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

TestMultiControlPlane/serial/CopyFile (10.44s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp testdata/cp-test.txt ha-224000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1762227409/001/cp-test_ha-224000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000:/home/docker/cp-test.txt ha-224000-m02:/home/docker/cp-test_ha-224000_ha-224000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m02 "sudo cat /home/docker/cp-test_ha-224000_ha-224000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000:/home/docker/cp-test.txt ha-224000-m03:/home/docker/cp-test_ha-224000_ha-224000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m03 "sudo cat /home/docker/cp-test_ha-224000_ha-224000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000:/home/docker/cp-test.txt ha-224000-m04:/home/docker/cp-test_ha-224000_ha-224000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m04 "sudo cat /home/docker/cp-test_ha-224000_ha-224000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp testdata/cp-test.txt ha-224000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1762227409/001/cp-test_ha-224000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m02:/home/docker/cp-test.txt ha-224000:/home/docker/cp-test_ha-224000-m02_ha-224000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000 "sudo cat /home/docker/cp-test_ha-224000-m02_ha-224000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m02:/home/docker/cp-test.txt ha-224000-m03:/home/docker/cp-test_ha-224000-m02_ha-224000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m03 "sudo cat /home/docker/cp-test_ha-224000-m02_ha-224000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m02:/home/docker/cp-test.txt ha-224000-m04:/home/docker/cp-test_ha-224000-m02_ha-224000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m04 "sudo cat /home/docker/cp-test_ha-224000-m02_ha-224000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp testdata/cp-test.txt ha-224000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1762227409/001/cp-test_ha-224000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m03:/home/docker/cp-test.txt ha-224000:/home/docker/cp-test_ha-224000-m03_ha-224000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000 "sudo cat /home/docker/cp-test_ha-224000-m03_ha-224000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m03:/home/docker/cp-test.txt ha-224000-m02:/home/docker/cp-test_ha-224000-m03_ha-224000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m02 "sudo cat /home/docker/cp-test_ha-224000-m03_ha-224000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m03:/home/docker/cp-test.txt ha-224000-m04:/home/docker/cp-test_ha-224000-m03_ha-224000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m04 "sudo cat /home/docker/cp-test_ha-224000-m03_ha-224000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp testdata/cp-test.txt ha-224000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1762227409/001/cp-test_ha-224000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt ha-224000:/home/docker/cp-test_ha-224000-m04_ha-224000.txt
E1213 11:32:03.345758    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000 "sudo cat /home/docker/cp-test_ha-224000-m04_ha-224000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt ha-224000-m02:/home/docker/cp-test_ha-224000-m04_ha-224000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m02 "sudo cat /home/docker/cp-test_ha-224000-m04_ha-224000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 cp ha-224000-m04:/home/docker/cp-test.txt ha-224000-m03:/home/docker/cp-test_ha-224000-m04_ha-224000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 ssh -n ha-224000-m03 "sudo cat /home/docker/cp-test_ha-224000-m04_ha-224000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.44s)
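The long cp/ssh sequence above is a matrix: seed each node with testdata/cp-test.txt, cross-copy it between every ordered pair of nodes, and read each file back with 'ssh -n <node> "sudo cat ..."' so both directions of "minikube cp" are verified. A condensed sketch under the assumption that the ha-224000 profile and binary path from the log are in use:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test against the ha-224000 profile.
func run(args ...string) error {
	return exec.Command("out/minikube-darwin-amd64",
		append([]string{"-p", "ha-224000"}, args...)...).Run()
}

func main() {
	nodes := []string{"ha-224000", "ha-224000-m02", "ha-224000-m03", "ha-224000-m04"}
	for _, src := range nodes {
		// seed the source node from the host
		_ = run("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			remote := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			_ = run("cp", src+":/home/docker/cp-test.txt", dst+":"+remote)
			_ = run("ssh", "-n", dst, "sudo cat "+remote) // read back to verify
		}
	}
}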

TestMultiControlPlane/serial/StopSecondaryNode (8.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-darwin-amd64 -p ha-224000 node stop m02 -v=7 --alsologtostderr: (8.355981479s)
ha_test.go:371: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr: exit status 7 (394.543499ms)
-- stdout --
	ha-224000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-224000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-224000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-224000-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1213 11:32:13.367282    5113 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:32:13.367554    5113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:32:13.367561    5113 out.go:358] Setting ErrFile to fd 2...
	I1213 11:32:13.367564    5113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:32:13.367744    5113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:32:13.367942    5113 out.go:352] Setting JSON to false
	I1213 11:32:13.367964    5113 mustload.go:65] Loading cluster: ha-224000
	I1213 11:32:13.368016    5113 notify.go:220] Checking for updates...
	I1213 11:32:13.368362    5113 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:32:13.368384    5113 status.go:174] checking status of ha-224000 ...
	I1213 11:32:13.368825    5113 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:32:13.368873    5113 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:32:13.380815    5113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51654
	I1213 11:32:13.381158    5113 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:32:13.381568    5113 main.go:141] libmachine: Using API Version  1
	I1213 11:32:13.381578    5113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:32:13.381788    5113 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:32:13.381911    5113 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:32:13.382001    5113 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:32:13.382087    5113 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 4112
	I1213 11:32:13.383321    5113 status.go:371] ha-224000 host status = "Running" (err=<nil>)
	I1213 11:32:13.383337    5113 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:32:13.383596    5113 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:32:13.383619    5113 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:32:13.397438    5113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51656
	I1213 11:32:13.397770    5113 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:32:13.398096    5113 main.go:141] libmachine: Using API Version  1
	I1213 11:32:13.398106    5113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:32:13.398326    5113 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:32:13.398423    5113 main.go:141] libmachine: (ha-224000) Calling .GetIP
	I1213 11:32:13.398521    5113 host.go:66] Checking if "ha-224000" exists ...
	I1213 11:32:13.398782    5113 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:32:13.398813    5113 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:32:13.410181    5113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51658
	I1213 11:32:13.410484    5113 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:32:13.410793    5113 main.go:141] libmachine: Using API Version  1
	I1213 11:32:13.410802    5113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:32:13.411001    5113 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:32:13.411107    5113 main.go:141] libmachine: (ha-224000) Calling .DriverName
	I1213 11:32:13.411267    5113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:32:13.411287    5113 main.go:141] libmachine: (ha-224000) Calling .GetSSHHostname
	I1213 11:32:13.411372    5113 main.go:141] libmachine: (ha-224000) Calling .GetSSHPort
	I1213 11:32:13.411455    5113 main.go:141] libmachine: (ha-224000) Calling .GetSSHKeyPath
	I1213 11:32:13.411529    5113 main.go:141] libmachine: (ha-224000) Calling .GetSSHUsername
	I1213 11:32:13.411614    5113 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000/id_rsa Username:docker}
	I1213 11:32:13.445469    5113 ssh_runner.go:195] Run: systemctl --version
	I1213 11:32:13.449741    5113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:32:13.460579    5113 kubeconfig.go:125] found "ha-224000" server: "https://192.169.0.254:8443"
	I1213 11:32:13.460603    5113 api_server.go:166] Checking apiserver status ...
	I1213 11:32:13.460659    5113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:32:13.471844    5113 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W1213 11:32:13.479506    5113 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:32:13.479567    5113 ssh_runner.go:195] Run: ls
	I1213 11:32:13.484065    5113 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1213 11:32:13.487232    5113 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1213 11:32:13.487243    5113 status.go:463] ha-224000 apiserver status = Running (err=<nil>)
	I1213 11:32:13.487250    5113 status.go:176] ha-224000 status: &{Name:ha-224000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:32:13.487263    5113 status.go:174] checking status of ha-224000-m02 ...
	I1213 11:32:13.487543    5113 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:32:13.487566    5113 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:32:13.499181    5113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51662
	I1213 11:32:13.499518    5113 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:32:13.499879    5113 main.go:141] libmachine: Using API Version  1
	I1213 11:32:13.499896    5113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:32:13.500136    5113 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:32:13.500244    5113 main.go:141] libmachine: (ha-224000-m02) Calling .GetState
	I1213 11:32:13.500331    5113 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:32:13.500400    5113 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid from json: 4150
	I1213 11:32:13.501602    5113 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid 4150 missing from process table
	I1213 11:32:13.501635    5113 status.go:371] ha-224000-m02 host status = "Stopped" (err=<nil>)
	I1213 11:32:13.501641    5113 status.go:384] host is not running, skipping remaining checks
	I1213 11:32:13.501645    5113 status.go:176] ha-224000-m02 status: &{Name:ha-224000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:32:13.501655    5113 status.go:174] checking status of ha-224000-m03 ...
	I1213 11:32:13.501942    5113 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:32:13.501966    5113 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:32:13.513581    5113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51664
	I1213 11:32:13.513907    5113 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:32:13.514227    5113 main.go:141] libmachine: Using API Version  1
	I1213 11:32:13.514238    5113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:32:13.514427    5113 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:32:13.514536    5113 main.go:141] libmachine: (ha-224000-m03) Calling .GetState
	I1213 11:32:13.514622    5113 main.go:141] libmachine: (ha-224000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:32:13.514702    5113 main.go:141] libmachine: (ha-224000-m03) DBG | hyperkit pid from json: 4216
	I1213 11:32:13.515969    5113 status.go:371] ha-224000-m03 host status = "Running" (err=<nil>)
	I1213 11:32:13.515977    5113 host.go:66] Checking if "ha-224000-m03" exists ...
	I1213 11:32:13.516240    5113 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:32:13.516263    5113 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:32:13.527738    5113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51666
	I1213 11:32:13.528048    5113 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:32:13.528362    5113 main.go:141] libmachine: Using API Version  1
	I1213 11:32:13.528373    5113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:32:13.528587    5113 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:32:13.528683    5113 main.go:141] libmachine: (ha-224000-m03) Calling .GetIP
	I1213 11:32:13.528794    5113 host.go:66] Checking if "ha-224000-m03" exists ...
	I1213 11:32:13.529062    5113 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:32:13.529085    5113 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:32:13.540499    5113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51668
	I1213 11:32:13.540864    5113 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:32:13.541177    5113 main.go:141] libmachine: Using API Version  1
	I1213 11:32:13.541192    5113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:32:13.541395    5113 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:32:13.541532    5113 main.go:141] libmachine: (ha-224000-m03) Calling .DriverName
	I1213 11:32:13.541701    5113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:32:13.541717    5113 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHHostname
	I1213 11:32:13.541806    5113 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHPort
	I1213 11:32:13.541902    5113 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHKeyPath
	I1213 11:32:13.541993    5113 main.go:141] libmachine: (ha-224000-m03) Calling .GetSSHUsername
	I1213 11:32:13.542079    5113 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m03/id_rsa Username:docker}
	I1213 11:32:13.574057    5113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:32:13.585592    5113 kubeconfig.go:125] found "ha-224000" server: "https://192.169.0.254:8443"
	I1213 11:32:13.585607    5113 api_server.go:166] Checking apiserver status ...
	I1213 11:32:13.585663    5113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:32:13.596794    5113 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1947/cgroup
	W1213 11:32:13.604150    5113 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1947/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:32:13.604211    5113 ssh_runner.go:195] Run: ls
	I1213 11:32:13.607895    5113 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1213 11:32:13.611072    5113 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1213 11:32:13.611083    5113 status.go:463] ha-224000-m03 apiserver status = Running (err=<nil>)
	I1213 11:32:13.611089    5113 status.go:176] ha-224000-m03 status: &{Name:ha-224000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:32:13.611098    5113 status.go:174] checking status of ha-224000-m04 ...
	I1213 11:32:13.611395    5113 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:32:13.611416    5113 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:32:13.623187    5113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51672
	I1213 11:32:13.623508    5113 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:32:13.623827    5113 main.go:141] libmachine: Using API Version  1
	I1213 11:32:13.623837    5113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:32:13.624028    5113 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:32:13.624131    5113 main.go:141] libmachine: (ha-224000-m04) Calling .GetState
	I1213 11:32:13.624212    5113 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:32:13.624298    5113 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid from json: 4360
	I1213 11:32:13.625563    5113 status.go:371] ha-224000-m04 host status = "Running" (err=<nil>)
	I1213 11:32:13.625572    5113 host.go:66] Checking if "ha-224000-m04" exists ...
	I1213 11:32:13.625834    5113 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:32:13.625853    5113 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:32:13.637198    5113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51674
	I1213 11:32:13.637549    5113 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:32:13.637895    5113 main.go:141] libmachine: Using API Version  1
	I1213 11:32:13.637912    5113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:32:13.638148    5113 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:32:13.638258    5113 main.go:141] libmachine: (ha-224000-m04) Calling .GetIP
	I1213 11:32:13.638371    5113 host.go:66] Checking if "ha-224000-m04" exists ...
	I1213 11:32:13.638640    5113 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:32:13.638665    5113 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:32:13.649950    5113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51676
	I1213 11:32:13.650308    5113 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:32:13.650635    5113 main.go:141] libmachine: Using API Version  1
	I1213 11:32:13.650647    5113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:32:13.650881    5113 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:32:13.650993    5113 main.go:141] libmachine: (ha-224000-m04) Calling .DriverName
	I1213 11:32:13.651144    5113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:32:13.651156    5113 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHHostname
	I1213 11:32:13.651244    5113 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHPort
	I1213 11:32:13.651329    5113 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHKeyPath
	I1213 11:32:13.651415    5113 main.go:141] libmachine: (ha-224000-m04) Calling .GetSSHUsername
	I1213 11:32:13.651512    5113 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/ha-224000-m04/id_rsa Username:docker}
	I1213 11:32:13.680800    5113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:32:13.691783    5113 status.go:176] ha-224000-m04 status: &{Name:ha-224000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.75s)
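The tail of the trace shows how the status command decides an apiserver is Running: it finds the server URL in the kubeconfig, then GETs /healthz and expects a 200 "ok". A self-contained sketch of that probe using the VIP from the log; TLS verification is skipped here only to avoid wiring up the cluster CA, which the real client trusts:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// skipping verification keeps the sketch self-contained
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.169.0.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}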

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.44s)

TestMultiControlPlane/serial/RestartSecondaryNode (41.65s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p ha-224000 node start m02 -v=7 --alsologtostderr: (41.092961433s)
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.65s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

TestMultiControlPlane/serial/StopCluster (24.98s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-amd64 -p ha-224000 stop -v=7 --alsologtostderr: (24.866142433s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr: exit status 7 (116.751535ms)
-- stdout --
	ha-224000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-224000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-224000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1213 11:38:26.609086    5565 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:38:26.609401    5565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:38:26.609407    5565 out.go:358] Setting ErrFile to fd 2...
	I1213 11:38:26.609411    5565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:38:26.609586    5565 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:38:26.609776    5565 out.go:352] Setting JSON to false
	I1213 11:38:26.609797    5565 mustload.go:65] Loading cluster: ha-224000
	I1213 11:38:26.609849    5565 notify.go:220] Checking for updates...
	I1213 11:38:26.610125    5565 config.go:182] Loaded profile config "ha-224000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:38:26.610148    5565 status.go:174] checking status of ha-224000 ...
	I1213 11:38:26.610568    5565 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:38:26.610614    5565 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:38:26.622631    5565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52087
	I1213 11:38:26.622977    5565 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:38:26.623368    5565 main.go:141] libmachine: Using API Version  1
	I1213 11:38:26.623379    5565 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:38:26.623649    5565 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:38:26.623771    5565 main.go:141] libmachine: (ha-224000) Calling .GetState
	I1213 11:38:26.623883    5565 main.go:141] libmachine: (ha-224000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:38:26.623941    5565 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid from json: 5248
	I1213 11:38:26.625068    5565 main.go:141] libmachine: (ha-224000) DBG | hyperkit pid 5248 missing from process table
	I1213 11:38:26.625108    5565 status.go:371] ha-224000 host status = "Stopped" (err=<nil>)
	I1213 11:38:26.625116    5565 status.go:384] host is not running, skipping remaining checks
	I1213 11:38:26.625120    5565 status.go:176] ha-224000 status: &{Name:ha-224000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:38:26.625152    5565 status.go:174] checking status of ha-224000-m02 ...
	I1213 11:38:26.626419    5565 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:38:26.626459    5565 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:38:26.639736    5565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52089
	I1213 11:38:26.640062    5565 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:38:26.640397    5565 main.go:141] libmachine: Using API Version  1
	I1213 11:38:26.640407    5565 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:38:26.640629    5565 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:38:26.640750    5565 main.go:141] libmachine: (ha-224000-m02) Calling .GetState
	I1213 11:38:26.640859    5565 main.go:141] libmachine: (ha-224000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:38:26.640937    5565 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid from json: 5263
	I1213 11:38:26.642072    5565 main.go:141] libmachine: (ha-224000-m02) DBG | hyperkit pid 5263 missing from process table
	I1213 11:38:26.642114    5565 status.go:371] ha-224000-m02 host status = "Stopped" (err=<nil>)
	I1213 11:38:26.642121    5565 status.go:384] host is not running, skipping remaining checks
	I1213 11:38:26.642126    5565 status.go:176] ha-224000-m02 status: &{Name:ha-224000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:38:26.642138    5565 status.go:174] checking status of ha-224000-m04 ...
	I1213 11:38:26.642379    5565 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:38:26.642400    5565 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:38:26.653860    5565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52091
	I1213 11:38:26.654160    5565 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:38:26.654496    5565 main.go:141] libmachine: Using API Version  1
	I1213 11:38:26.654508    5565 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:38:26.654726    5565 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:38:26.654832    5565 main.go:141] libmachine: (ha-224000-m04) Calling .GetState
	I1213 11:38:26.654969    5565 main.go:141] libmachine: (ha-224000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:38:26.654996    5565 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid from json: 5375
	I1213 11:38:26.656147    5565 main.go:141] libmachine: (ha-224000-m04) DBG | hyperkit pid 5375 missing from process table
	I1213 11:38:26.656181    5565 status.go:371] ha-224000-m04 host status = "Stopped" (err=<nil>)
	I1213 11:38:26.656189    5565 status.go:384] host is not running, skipping remaining checks
	I1213 11:38:26.656197    5565 status.go:176] ha-224000-m04 status: &{Name:ha-224000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.98s)
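The repeated "hyperkit pid NNNN missing from process table" lines are the driver's liveness check failing, which is what flips each host to Stopped. The usual POSIX idiom is to send signal 0 to the saved pid; a sketch of that check (treating EPERM as alive is my assumption, not confirmed from the driver source):

package main

import (
	"errors"
	"fmt"
	"syscall"
)

// pidAlive sends signal 0, which delivers nothing but reports whether the
// process exists. EPERM means it exists but belongs to another user.
func pidAlive(pid int) bool {
	err := syscall.Kill(pid, 0)
	if err == nil {
		return true
	}
	return errors.Is(err, syscall.EPERM)
}

func main() {
	fmt.Println(pidAlive(5248)) // pid taken from the trace above
}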

TestMultiControlPlane/serial/RestartCluster (163.22s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-224000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E1213 11:38:42.134808    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:39:19.496068    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-darwin-amd64 start -p ha-224000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (2m42.667478178s)
ha_test.go:568: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (163.22s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

TestMultiControlPlane/serial/AddSecondaryNode (75.45s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-224000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-224000 --control-plane -v=7 --alsologtostderr: (1m14.951837582s)
ha_test.go:613: (dbg) Run:  out/minikube-darwin-amd64 -p ha-224000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

TestImageBuild/serial/Setup (37.91s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-732000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-732000 --driver=hyperkit : (37.910454102s)
--- PASS: TestImageBuild/serial/Setup (37.91s)

TestImageBuild/serial/NormalBuild (1.82s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-732000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-732000: (1.821041841s)
--- PASS: TestImageBuild/serial/NormalBuild (1.82s)

TestImageBuild/serial/BuildWithBuildArg (0.73s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-732000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.73s)

TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-732000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.7s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-732000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.70s)

TestJSONOutput/start/Command (75.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-277000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E1213 11:43:42.130277    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:44:19.493188    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-277000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m15.731159288s)
--- PASS: TestJSONOutput/start/Command (75.73s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
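The DistinctCurrentSteps and IncreasingCurrentSteps subtests validate the event stream produced by --output=json: step numbers must never repeat and must only grow across the run. A sketch of that check, assuming minikube's CloudEvent shape with a data.currentstep string field (the field names are not shown in this log):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type event struct {
	Data struct {
		CurrentStep string `json:"currentstep"`
	} `json:"data"`
}

func main() {
	seen := map[int]bool{}
	last := -1
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` in here
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Data.CurrentStep == "" {
			continue // not every event carries a step
		}
		n, err := strconv.Atoi(ev.Data.CurrentStep)
		if err != nil {
			continue
		}
		if seen[n] {
			fmt.Printf("step %d repeated\n", n)
		}
		if n < last {
			fmt.Printf("step %d after %d: not increasing\n", n, last)
		}
		seen[n] = true
		last = n
	}
}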

TestJSONOutput/pause/Command (0.53s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-277000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-277000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.36s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-277000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-277000 --output=json --user=testUser: (8.355235016s)
--- PASS: TestJSONOutput/stop/Command (8.36s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
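
The two *CurrentSteps subtest families appear to assert structural properties of the currentstep field carried by minikube's step events (the field is visible in the TestErrorJSONOutput stdout below): within one command's event stream the values should be distinct and should never go backwards. A minimal sketch of that check, with steps reduced to plain ints for brevity; the helper name is illustrative, not taken from the test suite.

package main

import "fmt"

// distinctAndIncreasing reports whether the step indices are unique and
// never decrease, mirroring what the two subtest names suggest they assert.
func distinctAndIncreasing(steps []int) bool {
	seen := map[int]bool{}
	for i, s := range steps {
		if seen[s] {
			return false // duplicate currentstep
		}
		seen[s] = true
		if i > 0 && s < steps[i-1] {
			return false // currentstep went backwards
		}
	}
	return true
}

func main() {
	fmt.Println(distinctAndIncreasing([]int{0, 1, 3, 9})) // true
	fmt.Println(distinctAndIncreasing([]int{0, 2, 1}))    // false
}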

TestErrorJSONOutput (0.67s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-975000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-975000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (382.015067ms)

-- stdout --
	{"specversion":"1.0","id":"202b70b5-bdc9-48a8-8079-3b4c1c563a99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-975000] minikube v1.34.0 on Darwin 15.1.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"09ff2703-1c23-4c22-a5e9-88526e9f4df0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20090"}}
	{"specversion":"1.0","id":"5e8c2a8f-3883-41cc-acbc-a33c1cb26940","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig"}}
	{"specversion":"1.0","id":"98346f38-6b02-4f3b-a13f-f1fc1c875c6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"b88d4224-4b26-460c-acf4-98b8f3bfdbf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"122e85b7-8fa9-4e44-bc32-908743f1072d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube"}}
	{"specversion":"1.0","id":"326f0f0d-dd45-42cd-bffe-40f81d3fca6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1b7d2224-450d-4627-8678-848a6829a2b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-975000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-975000
--- PASS: TestErrorJSONOutput (0.67s)
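
Each stdout line above is a CloudEvents-style envelope (specversion, id, source, type, datacontenttype) with the payload under data; error events additionally carry exitcode and name, as in the DRV_UNSUPPORTED_OS event. A minimal sketch of decoding such a stream in Go, assuming only the fields visible in the log; the cloudEvent struct is illustrative, not minikube's own type.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// cloudEvent mirrors the envelope visible in the log lines above; the
// field set is inferred from those lines, not taken from minikube's source.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip any non-JSON lines mixed into the stream
		}
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		// step events carry currentstep/totalsteps; error events carry exitcode
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}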

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (90.51s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-601000 --driver=hyperkit 
E1213 11:45:42.560511    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-601000 --driver=hyperkit : (40.424159906s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-610000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-610000 --driver=hyperkit : (38.53142488s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-601000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-610000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-610000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-610000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-610000: (5.283354785s)
helpers_test.go:175: Cleaning up "first-601000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-601000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-601000: (5.286273659s)
--- PASS: TestMinikubeProfile (90.51s)

TestMultiNode/serial/FreshStart2Nodes (112.55s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-538000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E1213 11:49:19.490135    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-538000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m52.262744003s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.55s)

TestMultiNode/serial/DeployApp2Nodes (4.79s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-538000 -- rollout status deployment/busybox: (2.999419796s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- exec busybox-7dff88458-b4zr5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- exec busybox-7dff88458-ww9zd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- exec busybox-7dff88458-b4zr5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- exec busybox-7dff88458-ww9zd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- exec busybox-7dff88458-b4zr5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- exec busybox-7dff88458-ww9zd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.79s)

TestMultiNode/serial/PingHostFrom2Pods (0.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- exec busybox-7dff88458-b4zr5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- exec busybox-7dff88458-b4zr5 -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- exec busybox-7dff88458-ww9zd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-538000 -- exec busybox-7dff88458-ww9zd -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
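
The shell pipeline in this test (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) pulls the resolved host address out of busybox's nslookup output: line 5, third space-separated field, which is then fed to the ping check. A hedged Go equivalent of that extraction; the sample output is hypothetical, since the real text depends on the image's resolver.

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5 of
// the nslookup output and return its third space-separated field.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5 -> index 4
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // -f3 -> index 2
}

func main() {
	// Hypothetical busybox nslookup output for host.minikube.internal.
	sample := "Server:\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress 1: 192.169.0.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // 192.169.0.1
}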

TestMultiNode/serial/AddNode (48.88s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-538000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-538000 -v 3 --alsologtostderr: (48.511128619s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.88s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-538000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.4s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.40s)

TestMultiNode/serial/CopyFile (6.1s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp testdata/cp-test.txt multinode-538000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp multinode-538000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3503349962/001/cp-test_multinode-538000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp multinode-538000:/home/docker/cp-test.txt multinode-538000-m02:/home/docker/cp-test_multinode-538000_multinode-538000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m02 "sudo cat /home/docker/cp-test_multinode-538000_multinode-538000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp multinode-538000:/home/docker/cp-test.txt multinode-538000-m03:/home/docker/cp-test_multinode-538000_multinode-538000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m03 "sudo cat /home/docker/cp-test_multinode-538000_multinode-538000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp testdata/cp-test.txt multinode-538000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp multinode-538000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3503349962/001/cp-test_multinode-538000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp multinode-538000-m02:/home/docker/cp-test.txt multinode-538000:/home/docker/cp-test_multinode-538000-m02_multinode-538000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000 "sudo cat /home/docker/cp-test_multinode-538000-m02_multinode-538000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp multinode-538000-m02:/home/docker/cp-test.txt multinode-538000-m03:/home/docker/cp-test_multinode-538000-m02_multinode-538000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m03 "sudo cat /home/docker/cp-test_multinode-538000-m02_multinode-538000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp testdata/cp-test.txt multinode-538000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp multinode-538000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3503349962/001/cp-test_multinode-538000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp multinode-538000-m03:/home/docker/cp-test.txt multinode-538000:/home/docker/cp-test_multinode-538000-m03_multinode-538000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000 "sudo cat /home/docker/cp-test_multinode-538000-m03_multinode-538000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 cp multinode-538000-m03:/home/docker/cp-test.txt multinode-538000-m02:/home/docker/cp-test_multinode-538000-m03_multinode-538000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 ssh -n multinode-538000-m02 "sudo cat /home/docker/cp-test_multinode-538000-m03_multinode-538000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.10s)

TestMultiNode/serial/StopNode (2.94s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-538000 node stop m03: (2.345802269s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-538000 status: exit status 7 (291.68001ms)

-- stdout --
	multinode-538000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-538000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-538000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-538000 status --alsologtostderr: exit status 7 (298.148998ms)

-- stdout --
	multinode-538000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-538000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-538000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 11:51:52.516516    6433 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:51:52.517260    6433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:52.517277    6433 out.go:358] Setting ErrFile to fd 2...
	I1213 11:51:52.517283    6433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:52.517842    6433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:51:52.518037    6433 out.go:352] Setting JSON to false
	I1213 11:51:52.518061    6433 mustload.go:65] Loading cluster: multinode-538000
	I1213 11:51:52.518100    6433 notify.go:220] Checking for updates...
	I1213 11:51:52.518407    6433 config.go:182] Loaded profile config "multinode-538000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:51:52.518428    6433 status.go:174] checking status of multinode-538000 ...
	I1213 11:51:52.518824    6433 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:51:52.518860    6433 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:51:52.530522    6433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53122
	I1213 11:51:52.530828    6433 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:51:52.531232    6433 main.go:141] libmachine: Using API Version  1
	I1213 11:51:52.531243    6433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:51:52.531499    6433 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:51:52.531629    6433 main.go:141] libmachine: (multinode-538000) Calling .GetState
	I1213 11:51:52.531725    6433 main.go:141] libmachine: (multinode-538000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:51:52.531791    6433 main.go:141] libmachine: (multinode-538000) DBG | hyperkit pid from json: 6095
	I1213 11:51:52.533215    6433 status.go:371] multinode-538000 host status = "Running" (err=<nil>)
	I1213 11:51:52.533229    6433 host.go:66] Checking if "multinode-538000" exists ...
	I1213 11:51:52.533491    6433 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:51:52.533526    6433 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:51:52.547193    6433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53124
	I1213 11:51:52.547543    6433 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:51:52.547852    6433 main.go:141] libmachine: Using API Version  1
	I1213 11:51:52.547866    6433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:51:52.548106    6433 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:51:52.548204    6433 main.go:141] libmachine: (multinode-538000) Calling .GetIP
	I1213 11:51:52.548299    6433 host.go:66] Checking if "multinode-538000" exists ...
	I1213 11:51:52.548564    6433 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:51:52.548586    6433 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:51:52.560195    6433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53126
	I1213 11:51:52.560512    6433 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:51:52.560872    6433 main.go:141] libmachine: Using API Version  1
	I1213 11:51:52.560885    6433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:51:52.561097    6433 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:51:52.561210    6433 main.go:141] libmachine: (multinode-538000) Calling .DriverName
	I1213 11:51:52.561365    6433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:51:52.561382    6433 main.go:141] libmachine: (multinode-538000) Calling .GetSSHHostname
	I1213 11:51:52.561458    6433 main.go:141] libmachine: (multinode-538000) Calling .GetSSHPort
	I1213 11:51:52.561553    6433 main.go:141] libmachine: (multinode-538000) Calling .GetSSHKeyPath
	I1213 11:51:52.561629    6433 main.go:141] libmachine: (multinode-538000) Calling .GetSSHUsername
	I1213 11:51:52.561718    6433 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/multinode-538000/id_rsa Username:docker}
	I1213 11:51:52.598845    6433 ssh_runner.go:195] Run: systemctl --version
	I1213 11:51:52.605251    6433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:51:52.617273    6433 kubeconfig.go:125] found "multinode-538000" server: "https://192.169.0.15:8443"
	I1213 11:51:52.617297    6433 api_server.go:166] Checking apiserver status ...
	I1213 11:51:52.617351    6433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:52.633253    6433 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1918/cgroup
	W1213 11:51:52.641466    6433 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1918/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:51:52.641525    6433 ssh_runner.go:195] Run: ls
	I1213 11:51:52.644798    6433 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I1213 11:51:52.647962    6433 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
	I1213 11:51:52.647973    6433 status.go:463] multinode-538000 apiserver status = Running (err=<nil>)
	I1213 11:51:52.647981    6433 status.go:176] multinode-538000 status: &{Name:multinode-538000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:51:52.647992    6433 status.go:174] checking status of multinode-538000-m02 ...
	I1213 11:51:52.648251    6433 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:51:52.648270    6433 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:51:52.659938    6433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53130
	I1213 11:51:52.660411    6433 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:51:52.660755    6433 main.go:141] libmachine: Using API Version  1
	I1213 11:51:52.660768    6433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:51:52.661029    6433 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:51:52.661127    6433 main.go:141] libmachine: (multinode-538000-m02) Calling .GetState
	I1213 11:51:52.661221    6433 main.go:141] libmachine: (multinode-538000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:51:52.661300    6433 main.go:141] libmachine: (multinode-538000-m02) DBG | hyperkit pid from json: 6125
	I1213 11:51:52.662723    6433 status.go:371] multinode-538000-m02 host status = "Running" (err=<nil>)
	I1213 11:51:52.662731    6433 host.go:66] Checking if "multinode-538000-m02" exists ...
	I1213 11:51:52.663003    6433 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:51:52.663037    6433 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:51:52.674690    6433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53132
	I1213 11:51:52.675035    6433 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:51:52.675405    6433 main.go:141] libmachine: Using API Version  1
	I1213 11:51:52.675419    6433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:51:52.675631    6433 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:51:52.675730    6433 main.go:141] libmachine: (multinode-538000-m02) Calling .GetIP
	I1213 11:51:52.675832    6433 host.go:66] Checking if "multinode-538000-m02" exists ...
	I1213 11:51:52.676121    6433 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:51:52.676143    6433 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:51:52.687693    6433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53134
	I1213 11:51:52.688013    6433 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:51:52.688318    6433 main.go:141] libmachine: Using API Version  1
	I1213 11:51:52.688329    6433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:51:52.688551    6433 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:51:52.688656    6433 main.go:141] libmachine: (multinode-538000-m02) Calling .DriverName
	I1213 11:51:52.688799    6433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:51:52.688810    6433 main.go:141] libmachine: (multinode-538000-m02) Calling .GetSSHHostname
	I1213 11:51:52.688889    6433 main.go:141] libmachine: (multinode-538000-m02) Calling .GetSSHPort
	I1213 11:51:52.688968    6433 main.go:141] libmachine: (multinode-538000-m02) Calling .GetSSHKeyPath
	I1213 11:51:52.689060    6433 main.go:141] libmachine: (multinode-538000-m02) Calling .GetSSHUsername
	I1213 11:51:52.689134    6433 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20090-800/.minikube/machines/multinode-538000-m02/id_rsa Username:docker}
	I1213 11:51:52.720242    6433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:51:52.730498    6433 status.go:176] multinode-538000-m02 status: &{Name:multinode-538000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:51:52.730513    6433 status.go:174] checking status of multinode-538000-m03 ...
	I1213 11:51:52.730798    6433 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:51:52.730821    6433 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:51:52.742641    6433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53137
	I1213 11:51:52.742976    6433 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:51:52.743333    6433 main.go:141] libmachine: Using API Version  1
	I1213 11:51:52.743350    6433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:51:52.743559    6433 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:51:52.743669    6433 main.go:141] libmachine: (multinode-538000-m03) Calling .GetState
	I1213 11:51:52.743760    6433 main.go:141] libmachine: (multinode-538000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:51:52.743834    6433 main.go:141] libmachine: (multinode-538000-m03) DBG | hyperkit pid from json: 6210
	I1213 11:51:52.745209    6433 main.go:141] libmachine: (multinode-538000-m03) DBG | hyperkit pid 6210 missing from process table
	I1213 11:51:52.745240    6433 status.go:371] multinode-538000-m03 host status = "Stopped" (err=<nil>)
	I1213 11:51:52.745254    6433 status.go:384] host is not running, skipping remaining checks
	I1213 11:51:52.745265    6433 status.go:176] multinode-538000-m03 status: &{Name:multinode-538000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.94s)
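
The status text captured in the stdout blocks above follows a simple layout: a bare line names a node, and the key: value lines that follow describe it until the next bare line. A sketch of turning that text into a map per node; this is an illustration of the layout only, not minikube's own parser or status types.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseStatus turns "node name, then key: value lines" text into a map per node.
func parseStatus(out string) map[string]map[string]string {
	nodes := map[string]map[string]string{}
	var cur string
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		if k, v, ok := strings.Cut(line, ": "); ok {
			if cur != "" {
				nodes[cur][k] = v
			}
			continue
		}
		cur = line // a bare line opens a new node section
		nodes[cur] = map[string]string{}
	}
	return nodes
}

func main() {
	out := "multinode-538000\ntype: Control Plane\nhost: Running\n\nmultinode-538000-m03\ntype: Worker\nhost: Stopped\nkubelet: Stopped\n"
	fmt.Println(parseStatus(out)["multinode-538000-m03"]["host"]) // Stopped
}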

TestMultiNode/serial/StartAfterStop (36.7s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-538000 node start m03 -v=7 --alsologtostderr: (36.283214534s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.70s)

TestMultiNode/serial/RestartKeepsNodes (174.95s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-538000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-538000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-538000: (18.89409339s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-538000 --wait=true -v=8 --alsologtostderr
E1213 11:53:42.148721    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:54:19.512470    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-538000 --wait=true -v=8 --alsologtostderr: (2m35.909409302s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-538000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (174.95s)

TestMultiNode/serial/DeleteNode (3.4s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-538000 node delete m03: (3.008866012s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.40s)

TestMultiNode/serial/StopMultiNode (16.86s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-538000 stop: (16.65201953s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-538000 status: exit status 7 (101.134142ms)

-- stdout --
	multinode-538000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-538000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-538000 status --alsologtostderr: exit status 7 (102.173089ms)

-- stdout --
	multinode-538000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-538000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 11:55:44.611608    6613 out.go:345] Setting OutFile to fd 1 ...
	I1213 11:55:44.611849    6613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:55:44.611855    6613 out.go:358] Setting ErrFile to fd 2...
	I1213 11:55:44.611858    6613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 11:55:44.612051    6613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20090-800/.minikube/bin
	I1213 11:55:44.612256    6613 out.go:352] Setting JSON to false
	I1213 11:55:44.612277    6613 mustload.go:65] Loading cluster: multinode-538000
	I1213 11:55:44.612316    6613 notify.go:220] Checking for updates...
	I1213 11:55:44.612633    6613 config.go:182] Loaded profile config "multinode-538000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1213 11:55:44.612654    6613 status.go:174] checking status of multinode-538000 ...
	I1213 11:55:44.613058    6613 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:55:44.613099    6613 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:55:44.624918    6613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53367
	I1213 11:55:44.625218    6613 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:55:44.625609    6613 main.go:141] libmachine: Using API Version  1
	I1213 11:55:44.625618    6613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:55:44.625861    6613 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:55:44.625978    6613 main.go:141] libmachine: (multinode-538000) Calling .GetState
	I1213 11:55:44.626083    6613 main.go:141] libmachine: (multinode-538000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:55:44.626149    6613 main.go:141] libmachine: (multinode-538000) DBG | hyperkit pid from json: 6502
	I1213 11:55:44.627289    6613 main.go:141] libmachine: (multinode-538000) DBG | hyperkit pid 6502 missing from process table
	I1213 11:55:44.627329    6613 status.go:371] multinode-538000 host status = "Stopped" (err=<nil>)
	I1213 11:55:44.627337    6613 status.go:384] host is not running, skipping remaining checks
	I1213 11:55:44.627340    6613 status.go:176] multinode-538000 status: &{Name:multinode-538000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:55:44.627359    6613 status.go:174] checking status of multinode-538000-m02 ...
	I1213 11:55:44.627614    6613 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1213 11:55:44.627637    6613 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1213 11:55:44.642426    6613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53369
	I1213 11:55:44.642770    6613 main.go:141] libmachine: () Calling .GetVersion
	I1213 11:55:44.643105    6613 main.go:141] libmachine: Using API Version  1
	I1213 11:55:44.643118    6613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 11:55:44.643329    6613 main.go:141] libmachine: () Calling .GetMachineName
	I1213 11:55:44.643434    6613 main.go:141] libmachine: (multinode-538000-m02) Calling .GetState
	I1213 11:55:44.643534    6613 main.go:141] libmachine: (multinode-538000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1213 11:55:44.643604    6613 main.go:141] libmachine: (multinode-538000-m02) DBG | hyperkit pid from json: 6537
	I1213 11:55:44.644773    6613 main.go:141] libmachine: (multinode-538000-m02) DBG | hyperkit pid 6537 missing from process table
	I1213 11:55:44.644796    6613 status.go:371] multinode-538000-m02 host status = "Stopped" (err=<nil>)
	I1213 11:55:44.644801    6613 status.go:384] host is not running, skipping remaining checks
	I1213 11:55:44.644805    6613 status.go:176] multinode-538000-m02 status: &{Name:multinode-538000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.86s)

TestMultiNode/serial/RestartMultiNode (107.24s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-538000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-538000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m46.852180885s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-538000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (107.24s)

TestMultiNode/serial/ValidateNameConflict (42.06s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-538000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-538000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-538000-m02 --driver=hyperkit : exit status 14 (566.247428ms)

-- stdout --
	* [multinode-538000-m02] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-538000-m02' is duplicated with machine name 'multinode-538000-m02' in profile 'multinode-538000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-538000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-538000-m03 --driver=hyperkit : (37.701807986s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-538000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-538000: exit status 80 (288.031033ms)

-- stdout --
	* Adding node m03 to cluster multinode-538000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-538000-m03 already exists in multinode-538000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-538000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-538000-m03: (3.431471022s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.06s)
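
What this test exercises: a new profile name must not collide with an existing profile or with one of its machine names. Above, multinode-538000-m02 is rejected (MK_USAGE, exit status 14) because machine m02 still exists inside the multinode-538000 profile, while multinode-538000-m03 is accepted because that node was deleted earlier in the serial run, so the name is free again. A rough sketch of such a check; the profile-to-machines map is an assumed shape for illustration, not minikube's actual data model.

package main

import "fmt"

// isDuplicate reports whether a requested profile name collides with an
// existing profile or any machine inside one.
func isDuplicate(newName string, profiles map[string][]string) bool {
	for profile, machines := range profiles {
		if newName == profile {
			return true
		}
		for _, m := range machines {
			if newName == m {
				return true
			}
		}
	}
	return false
}

func main() {
	// multinode-538000 still owns machine m02; m03 was deleted earlier.
	profiles := map[string][]string{
		"multinode-538000": {"multinode-538000", "multinode-538000-m02"},
	}
	fmt.Println(isDuplicate("multinode-538000-m02", profiles)) // true  -> MK_USAGE, exit 14
	fmt.Println(isDuplicate("multinode-538000-m03", profiles)) // false -> start proceeds
}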

TestPreload (160.99s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-078000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E1213 11:58:42.145082    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 11:59:19.507554    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-078000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m15.029626873s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-078000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-078000 image pull gcr.io/k8s-minikube/busybox: (1.518980207s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-078000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-078000: (8.402594338s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-078000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-078000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m10.589668186s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-078000 image list
helpers_test.go:175: Cleaning up "test-preload-078000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-078000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-078000: (5.270295339s)
--- PASS: TestPreload (160.99s)

TestSkaffold (115.76s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1494498770 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1494498770 version: (1.717575084s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-318000 --memory=2600 --driver=hyperkit 
E1213 12:03:42.141977    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-318000 --memory=2600 --driver=hyperkit : (38.540911228s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1494498770 run --minikube-profile skaffold-318000 --kube-context skaffold-318000 --status-check=true --port-forward=false --interactive=false
E1213 12:04:19.503389    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1494498770 run --minikube-profile skaffold-318000 --kube-context skaffold-318000 --status-check=true --port-forward=false --interactive=false: (57.649823077s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-9486875cc-cvz4s" [f8a97945-7376-4758-b692-32ca3507224b] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003585773s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-c7cf798c6-2fjbh" [d7a79686-e677-4d3a-bfdd-1a5417ba7e6f] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005691596s
helpers_test.go:175: Cleaning up "skaffold-318000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-318000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-318000: (5.27300196s)
--- PASS: TestSkaffold (115.76s)

TestRunningBinaryUpgrade (105.92s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3566132557 start -p running-upgrade-585000 --memory=2200 --vm-driver=hyperkit 
E1213 12:18:42.282065    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:19:02.715668    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3566132557 start -p running-upgrade-585000 --memory=2200 --vm-driver=hyperkit : (1m5.414760186s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-585000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1213 12:19:19.643362    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-585000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (33.853585919s)
helpers_test.go:175: Cleaning up "running-upgrade-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-585000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-585000: (5.255561584s)
--- PASS: TestRunningBinaryUpgrade (105.92s)

TestKubernetesUpgrade (1332.54s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-488000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-488000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (51.963016204s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-488000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-488000: (2.416908807s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-488000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-488000 status --format={{.Host}}: exit status 7 (85.991075ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
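
The `--format={{.Host}}` argument is a Go text/template executed against the status structure, which is why the command prints only "Stopped"; the non-zero exit encodes the stopped state rather than a command failure, hence the test's "may be ok". A sketch of that template mechanism, with a stand-in struct rather than minikube's actual type:

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a stand-in for the structure minikube renders; only the field
    // the template above references is needed for the demonstration.
    type Status struct {
        Name, Host, Kubelet, APIServer string
    }

    func main() {
        st := Status{Name: "kubernetes-upgrade-488000", Host: "Stopped"}
        // --format={{.Host}} is parsed and executed much like this:
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        if err := tmpl.Execute(os.Stdout, st); err != nil {
            os.Exit(1)
        }
    }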
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-488000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit 
E1213 12:23:42.283523    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:24:19.648259    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:25:07.055361    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:26:30.133525    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:28:42.288175    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:29:19.651411    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:30:07.059734    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-488000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit : (10m27.710958518s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-488000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-488000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-488000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (567.569221ms)
-- stdout --
	* [kubernetes-upgrade-488000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-488000
	    minikube start -p kubernetes-upgrade-488000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4880002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-488000 --kubernetes-version=v1.31.2
	    

** /stderr **
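
Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) comes from a guard that refuses to move an existing cluster to an older Kubernetes release, since an in-place downgrade cannot be done safely. A hedged sketch of such a version check using golang.org/x/mod/semver (a module dependency; minikube's real implementation differs in detail):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/mod/semver"
    )

    // validateRequestedVersion rejects moving an existing cluster to an older
    // Kubernetes release; upgrades and same-version restarts pass through.
    func validateRequestedVersion(existing, requested string) error {
        if semver.Compare(requested, existing) < 0 {
            return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
        }
        return nil
    }

    func main() {
        if err := validateRequestedVersion("v1.31.2", "v1.20.0"); err != nil {
            fmt.Fprintln(os.Stderr, "X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
            os.Exit(106) // the exit status observed in the run above
        }
    }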
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-488000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit 
E1213 12:33:42.342675    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:34:19.707177    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:35:07.114427    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:35:42.781500    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:36:45.420540    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:38:42.347828    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:39:19.710292    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:40:07.117903    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-488000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit : (10m44.439836191s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-488000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-488000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-488000: (5.296872215s)
--- PASS: TestKubernetesUpgrade (1332.54s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.16s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=20090
- KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2429869930/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2429869930/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2429869930/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2429869930/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.16s)
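
The chown/chmod pair above exists because the hyperkit driver needs root privileges to create VMs: the binary is expected to be owned by root:wheel and carry the setuid bit. The warning shows the test environment declining the sudo prompt (`--interactive=false`), after which minikube proceeds with the driver as-is. A sketch of checking that precondition from Go (Unix-only; the install path is illustrative):

    package main

    import (
        "fmt"
        "log"
        "os"
        "syscall"
    )

    func main() {
        path := "/usr/local/bin/docker-machine-driver-hyperkit" // illustrative install path
        fi, err := os.Stat(path)
        if err != nil {
            log.Fatal(err)
        }
        st, ok := fi.Sys().(*syscall.Stat_t)
        if !ok {
            log.Fatal("not a Unix stat result")
        }
        rootOwned := st.Uid == 0
        setuid := fi.Mode()&os.ModeSetuid != 0
        fmt.Printf("owned by root: %v, setuid: %v\n", rootOwned, setuid)
        if !rootOwned || !setuid {
            // These are the bits the chown root:wheel / chmod u+s commands
            // above establish.
            fmt.Println("needs: sudo chown root:wheel", path, "&& sudo chmod u+s", path)
        }
    }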

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.1s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=20090
- KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2678167988/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2678167988/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2678167988/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2678167988/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.10s)

TestStoppedBinaryUpgrade/Setup (1.63s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.63s)

TestStoppedBinaryUpgrade/Upgrade (123.3s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3650940277 start -p stopped-upgrade-809000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3650940277 start -p stopped-upgrade-809000 --memory=2200 --vm-driver=hyperkit : (43.363577408s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3650940277 -p stopped-upgrade-809000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3650940277 -p stopped-upgrade-809000 stop: (8.25835668s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-809000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1213 12:43:10.199208    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/skaffold-318000/client.crt: no such file or directory" logger="UnhandledError"
E1213 12:43:42.349930    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/addons-723000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-809000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m11.676578745s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (123.30s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.07s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-809000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-809000: (2.072809857s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.07s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.58s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-477000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-477000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (577.59263ms)
-- stdout --
	* [NoKubernetes-477000] minikube v1.34.0 on Darwin 15.1.1
	  - MINIKUBE_LOCATION=20090
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20090-800/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20090-800/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.58s)
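
Exit status 14 (MK_USAGE) is a pure flag-validation failure: --kubernetes-version has no meaning when --no-kubernetes is set, so the combination is rejected before any VM work begins. A minimal sketch of that kind of mutual-exclusion check with the standard flag package (flag names mirror the CLI; this is not minikube's internal code):

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
        k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()

        // The two flags contradict each other, so reject the combination
        // up front -- mirroring the MK_USAGE failure above.
        if *noK8s && *k8sVersion != "" {
            fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14) // the exit status observed in the run above
        }
        fmt.Println("flags ok")
    }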

TestNoKubernetes/serial/StartWithK8s (186.52s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-477000 --driver=hyperkit 
E1213 12:44:19.712869    1796 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20090-800/.minikube/profiles/functional-178000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-477000 --driver=hyperkit : (3m6.33820377s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-477000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (186.52s)

TestNoKubernetes/serial/StartWithStopK8s (7.65s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-477000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-477000 --no-kubernetes --driver=hyperkit : (5.065613889s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-477000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-477000 status -o json: exit status 2 (176.454239ms)
-- stdout --
	{"Name":"NoKubernetes-477000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-477000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-477000: (2.409902259s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.65s)
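
The non-zero exit from `status -o json` is expected here: the host is running but the kubelet and apiserver are intentionally stopped, and minikube signals degraded components through the exit code while still emitting well-formed JSON. That JSON can be decoded directly; a sketch using only the fields visible in the output above (the struct is defined here for illustration and is not minikube's own type):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // ProfileStatus mirrors the fields visible in the JSON above.
    type ProfileStatus struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        raw := []byte(`{"Name":"NoKubernetes-477000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
        var st ProfileStatus
        if err := json.Unmarshal(raw, &st); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
    }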

TestNoKubernetes/serial/Start (19.35s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-477000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-477000 --no-kubernetes --driver=hyperkit : (19.354646951s)
--- PASS: TestNoKubernetes/serial/Start (19.35s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-477000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-477000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (156.221522ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
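
`systemctl is-active --quiet` exits 0 only when the unit is active; status 3 conventionally means the unit is inactive. The test therefore asserts a non-zero exit to prove the kubelet is not running. A sketch of reading such an exit code from Go, run locally for simplicity rather than through `minikube ssh`:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Illustrative: probe the local systemd instead of the VM's.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        err := cmd.Run()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("kubelet is active")
        case errors.As(err, &exitErr):
            // An inactive unit yields exit status 3, as seen in the log above.
            fmt.Printf("kubelet not active (exit status %d)\n", exitErr.ExitCode())
        default:
            fmt.Println("could not run systemctl:", err)
        }
    }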

TestNoKubernetes/serial/ProfileList (0.69s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.69s)

TestNoKubernetes/serial/Stop (2.43s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-477000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-477000: (2.425999485s)
--- PASS: TestNoKubernetes/serial/Stop (2.43s)

TestNoKubernetes/serial/StartNoArgs (19.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-477000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-477000 --driver=hyperkit : (19.343995065s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.34s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.15s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-477000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-477000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (150.424246ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.15s)

Test skip (20/221)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
